For entirely DMA coherent architectures there is no requirement to ever
remap DMA coherent allocations. Move all the remap and pool code under
IS_ENABLED() checks and drop the Kconfig dependency.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Inline __iommu_dma_mmap_pfn into the main function, and use the
fact that __iommu_dma_get_pages returns NULL for remapped contiguous
allocations to simplify the code flow a bit.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Inline __iommu_dma_get_sgtable_page into the main function, and use the
fact that __iommu_dma_get_pages returns NULL for remapped contiguous
allocations to simplify the code flow a bit.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
All the logic in iommu_dma_alloc that deals with page allocation from
the CMA or page allocators can be split into a self-contained helper,
and we can then map the result of that or the atomic pool allocation
with the iommu later. This also allows reusing __iommu_dma_free to
tear down the allocations and MMU mappings when the IOMMU mapping
fails.
Based on a patch from Robin Murphy.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Most importantly clear up the size / iosize confusion. Also rename addr
to cpu_addr to match the surrounding code and make the intention a little
more clear.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Most of it can double up to serve the failure cleanup path for
iommu_dma_alloc().
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Instead of having separate code paths for the non-blocking alloc_pages
and CMA allocations, merge them into one. There is a slight
behavior change here in that we try the page allocator if CMA fails.
This matches what dma-direct and other iommu drivers do and will be
needed to use the dma-iommu code on architectures without DMA remapping
later on.
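As a hedged illustration of the merged flow (simplified, not the actual
dma-iommu code; the helper name is made up): try CMA when blocking is
allowed, and fall back to the normal page allocator on failure.

#include <linux/device.h>
#include <linux/dma-contiguous.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/* Illustrative only: CMA first when we may block, page allocator second. */
static struct page *sketch_alloc_dma_pages(struct device *dev, size_t size,
					   gfp_t gfp)
{
	size_t count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	unsigned int order = get_order(size);
	struct page *page = NULL;

	if (gfpflags_allow_blocking(gfp))
		page = dma_alloc_from_contiguous(dev, count, order,
						 gfp & __GFP_NOWARN);
	if (!page)
		page = alloc_pages(gfp, order);
	return page;
}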
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Always remapping CMA allocations was largely a bodge to keep the freeing
logic manageable when it was split between here and an arch wrapper. Now
that it's all together and streamlined, we can relax that limitation.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Shuffle around the self-contained atomic and non-contiguous cases to
return early and get out of the way of the CMA case that we're about to
work on next.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[hch: slight changes to the code flow]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The freeing logic was made particularly horrible by part of it being
opaque to the arch wrapper, which led to a lot of convoluted repetition
to ensure each path did everything in the right order. Now that it's
all private, we can pick apart and consolidate the logically-distinct
steps of freeing the IOMMU mapping, the underlying pages, and the CPU
remap (if necessary) into something much more manageable.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[various cosmetic changes to the code flow]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
We only have a single caller of this function left, so open code it there.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Move the call to dma_common_pages_remap into __iommu_dma_alloc and
rename it to iommu_dma_alloc_remap. This creates a self-contained
helper for remapped pages allocation and mapping.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Since we duplicate the find_vm_area() logic a few times in places where
we only care about the pages, factor out a helper to abstract it.
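A minimal sketch of the kind of helper meant here, assuming the remapped
buffers were set up through vmap and can be found with find_vm_area()
(illustrative name, not necessarily the exact helper):

#include <linux/vmalloc.h>

static struct page **sketch_get_remap_pages(void *cpu_addr)
{
	struct vm_struct *area = find_vm_area(cpu_addr);

	/* No warning here: a NULL return is used to mean "not remapped". */
	return area ? area->pages : NULL;
}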
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[hch: don't warn when not finding a region, as we'll rely on that later]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The remaining internal callsites don't care about having prototypes
compatible with the relevant dma_map_ops callbacks, so the extra
level of indirection just wastes space and complicates things.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Most of the callers don't care, and the couple that do already have the
domain to hand for other reasons are in slow paths where the (trivial)
overhead of a repeated lookup will be utterly immaterial.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[hch: dropped the hunk touching iommu_dma_get_msi_page to avoid a
conflict with another series]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Moving this function up to its unmap counterpart helps to keep related
code together for the following changes.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
There is nothing really arm64 specific in the iommu_dma_ops
implementation, so move it to dma-iommu.c and keep a lot of symbols
self-contained. Note the implementation does depend on the
DMA_DIRECT_REMAP infrastructure for now, so we'll have to make the
DMA_IOMMU support depend on it, but this will be relaxed soon.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
arch_dma_prep_coherent can handle physically contiguous ranges larger
than PAGE_SIZE just fine, which means we don't need a page-based
iterator.
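A hedged sketch of what the caller side can then look like (one call per
contiguous buffer instead of a per-page loop; the wrapper name is
illustrative):

#include <linux/dma-noncoherent.h>
#include <linux/mm.h>

static void sketch_prep_buffer(struct page *page, size_t size)
{
	/* Handles the whole physically contiguous range in one call. */
	arch_dma_prep_coherent(page, size);
}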
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
We now have an arch_dma_prep_coherent architecture hook that is used
for the generic DMA remap allocator, and we should use the same
interface for the dma-iommu code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Used by iommu.c before creating identity mappings for reserved
ranges to ensure dma-ops won't ever remap these ranges.
Signed-off-by: James Sewart <jamessewart@arista.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Normally, when the iommu probes a device, a default domain will
be allocated and attached to the device. The domain type of
the default domain is statically defined, which results in a
situation where the allocated default domain isn't suitable
for the device due to some limitations. We already have the
iommu_request_dm_for_dev() API to replace a DMA domain with an
identity one. This adds iommu_request_dma_domain_for_dev()
to request a DMA domain if the allocated identity domain isn't
suitable for the device in question.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Set the page walk snoop to the right bit; otherwise it
overlaps the domain id field.
Reported-by: Dave Jiang <dave.jiang@intel.com>
Fixes: 6f7db75e1c ("iommu/vt-d: Add second level page table interface")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Lockdep debug reported lock inversion related with the iommu code
caused by dmar_insert_one_dev_info() grabbing the iommu->lock and
the device_domain_lock out of order versus the code path in
iommu_flush_dev_iotlb(). Expanding the scope of the iommu->lock and
reversing the order of lock acquisition fixes the issue.
[ 76.238180] dsa_bus wq0.0: dsa wq wq0.0 disabled
[ 76.248706]
[ 76.250486] ========================================================
[ 76.257113] WARNING: possible irq lock inversion dependency detected
[ 76.263736] 5.1.0-rc5+ #162 Not tainted
[ 76.267854] --------------------------------------------------------
[ 76.274485] systemd-journal/521 just changed the state of lock:
[ 76.280685] 0000000055b330f5 (device_domain_lock){..-.}, at: iommu_flush_dev_iotlb.part.63+0x29/0x90
[ 76.290099] but this lock took another, SOFTIRQ-unsafe lock in the past:
[ 76.297093] (&(&iommu->lock)->rlock){+.+.}
[ 76.297094]
[ 76.297094]
[ 76.297094] and interrupts could create inverse lock ordering between them.
[ 76.297094]
[ 76.314257]
[ 76.314257] other info that might help us debug this:
[ 76.321448] Possible interrupt unsafe locking scenario:
[ 76.321448]
[ 76.328907] CPU0 CPU1
[ 76.333777] ---- ----
[ 76.338642] lock(&(&iommu->lock)->rlock);
[ 76.343165] local_irq_disable();
[ 76.349422] lock(device_domain_lock);
[ 76.356116] lock(&(&iommu->lock)->rlock);
[ 76.363154] <Interrupt>
[ 76.366134] lock(device_domain_lock);
[ 76.370548]
[ 76.370548] *** DEADLOCK ***
Fixes: 745f2586e7 ("iommu/vt-d: Simplify function get_domain_for_dev()")
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
iommu_group_get_for_dev() will allocate a group for a
device if it isn't in any group. This isn't the use case
in iommu_request_dm_for_dev(). Let's use iommu_group_get()
instead.
Fixes: d290f1e70d ("iommu: Introduce iommu_request_dm_for_dev()")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
A DMAR table walk would typically follow the below process.
1. Bus number is used to index into root table which points to a context
table.
2. Device number and Function number are used together to index into
context table which then points to a pasid directory.
3. PASID[19:6] is used to index into PASID directory which points to a
PASID table.
4. PASID[5:0] is used to index into PASID table which points to all levels
of page tables.
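As a rough illustration of steps 3 and 4, the PASID value is split like this
(helper names are hypothetical, not the driver's):

#include <linux/types.h>

static inline unsigned int pasid_dir_index(u32 pasid)
{
	return (pasid >> 6) & 0x3fff;	/* PASID[19:6] selects the directory entry */
}

static inline unsigned int pasid_tbl_index(u32 pasid)
{
	return pasid & 0x3f;		/* PASID[5:0] selects the table entry */
}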
Whenever a user opens the file
"/sys/kernel/debug/iommu/intel/dmar_translation_struct", the above
described DMAR table walk is performed and the contents of the table are
dumped into the file. The dump could be handy while dealing with devices
that use PASID.
Example of such dump:
cat /sys/kernel/debug/iommu/intel/dmar_translation_struct
(Please note that because of 80 char limit, entries that should have been
in the same line are broken into different lines)
IOMMU dmar0: Root Table Address: 0x436f7c000
B.D.F Root_entry Context_entry
PASID PASID_table_entry
00:0a.0 0x0000000000000000:0x000000044dd3f001 0x0000000000100000:0x0000000435460e1d
0 0x000000044d6e1089:0x0000000000000003:0x0000000000000001
00:0a.0 0x0000000000000000:0x000000044dd3f001 0x0000000000100000:0x0000000435460e1d
1 0x0000000000000049:0x0000000000000001:0x0000000003c0e001
Note that the above format is followed even for a legacy DMAR table dump,
which doesn't support PASID; in such cases PASID is defaulted to -1,
indicating that PASID and its related fields are invalid.
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Sohil Mehta <sohil.mehta@intel.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
A scalable mode DMAR table walk involves looking at bits at each stage
of the walk, such as:
1. Is PASID enabled in the context entry?
2. What's the size of PASID directory?
3. Is the PASID directory entry present?
4. Is the PASID table entry present?
5. Number of PASID table entries?
Hence, add these macros that will later be used during this walk.
Apart from adding new macros, move existing macros (like
pasid_pde_is_present(), get_pasid_table_from_pde() and pasid_supported())
to appropriate header files so that they could be reused.
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Sohil Mehta <sohil.mehta@intel.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Presently, "/sys/kernel/debug/iommu/intel/dmar_translation_struct" file
dumps DMAR tables in the below format
IOMMU dmar2: Root Table Address:4362cc000
Root Table Entries:
Bus: 0 H: 0 L: 4362f0001
Context Table Entries for Bus: 0
Entry B:D.F High Low
160 00:14.0 102 4362ef001
184 00:17.0 302 435ec4001
248 00:1f.0 202 436300001
This format has a few shortcomings:
1. When extended for dumping a scalable mode DMAR table it will quickly become
very clumsy, making it unreadable.
2. It has information like the Bus number and Entry which are basically
part of B:D.F, hence are a repetition and not so useful.
So, change it to a new format which could be easily extended to dump
scalable mode DMAR table. The new format looks as below:
IOMMU dmar2: Root Table Address: 0x436f7d000
B.D.F Root_entry Context_entry
00:14.0 0x0000000000000000:0x0000000436fbd001 0x0000000000000102:0x0000000436fbc001
00:17.0 0x0000000000000000:0x0000000436fbd001 0x0000000000000302:0x0000000436af4001
00:1f.0 0x0000000000000000:0x0000000436fbd001 0x0000000000000202:0x0000000436fcd001
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Sohil Mehta <sohil.mehta@intel.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
So that all types are printed in the same format.
Fixes: c52c72d3de ("iommu: Add sysfs attribyte for domain type")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
We use RCU to protect rarely updated lists like the iommu, rmrr and atsr units.
I'm not sure why domain_remove_dev_info() in domain_exit() was surrounded
by rcu_read_lock. The lock was present before the refactoring in d160aca527,
but it was related to the RCU list, not the domain_remove_dev_info() function.
dmar_remove_one_dev_info() doesn't touch any of those lists, so it doesn't
require the lock. In fact it is called 6 times without it anyway.
Fixes: d160aca527 ("iommu/vt-d: Unify domain->iommu attach/detachment")
Signed-off-by: Lukasz Odzioba <lukasz.odzioba@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The variable npages is being initialized, however this value is never read
and it is later reassigned to a new value. The initialization is
redundant and hence can be removed.
Addresses-Coverity: ("Unused Value")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
If multiple devices try to bind to the same mm/PASID, we need to
set up first level PASID entries for all the devices. The current
code does not consider this case which results in failed DMA for
devices after the first bind.
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Reported-by: Mike Campin <mike.campin@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Add SPDX license identifiers to all Make/Kconfig files which:
- Have no license information of any form
These files fall under the project license, GPL v2 only. The resulting SPDX
license identifier is:
GPL-2.0-only
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add SPDX license identifiers to all files which:
- Have no license information of any form
- Have EXPORT_.*_SYMBOL_GPL inside which was used in the
initial scan/conversion to ignore the file
These files fall under the project license, GPL v2 only. The resulting SPDX
license identifier is:
GPL-2.0-only
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pull IRQ chip updates from Ingo Molnar:
"A late irqchips update:
- New TI INTR/INTA set of drivers
- Rewrite of the stm32mp1-exti driver as a platform driver
- Update the IOMMU MSI mapping API to be RT friendly
- A number of cleanups and other low impact fixes"
* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (34 commits)
iommu/dma-iommu: Remove iommu_dma_map_msi_msg()
irqchip/gic-v3-mbi: Don't map the MSI page in mbi_compose_m{b, s}i_msg()
irqchip/ls-scfg-msi: Don't map the MSI page in ls_scfg_msi_compose_msg()
irqchip/gic-v3-its: Don't map the MSI page in its_irq_compose_msi_msg()
irqchip/gicv2m: Don't map the MSI page in gicv2m_compose_msi_msg()
iommu/dma-iommu: Split iommu_dma_map_msi_msg() in two parts
genirq/msi: Add a new field in msi_desc to store an IOMMU cookie
arm64: arch_k3: Enable interrupt controller drivers
irqchip/ti-sci-inta: Add msi domain support
soc: ti: Add MSI domain bus support for Interrupt Aggregator
irqchip/ti-sci-inta: Add support for Interrupt Aggregator driver
dt-bindings: irqchip: Introduce TISCI Interrupt Aggregator bindings
irqchip/ti-sci-intr: Add support for Interrupt Router driver
dt-bindings: irqchip: Introduce TISCI Interrupt router bindings
gpio: thunderx: Use the default parent apis for {request,release}_resources
genirq: Introduce irq_chip_{request,release}_resource_parent() apis
firmware: ti_sci: Add helper apis to manage resources
firmware: ti_sci: Add RM mapping table for am654
firmware: ti_sci: Add support for IRQ management
firmware: ti_sci: Add support for RM core ops
...
Since commit dccd2304cc ("ARM: 7430/1: sizes.h: move from asm-generic
to <linux/sizes.h>"), <asm/sizes.h> and <asm-generic/sizes.h> are just
wrappers of <linux/sizes.h>.
This commit replaces all remaining uses of <asm/sizes.h> and
<asm-generic/sizes.h> with <linux/sizes.h> to prepare for the removal.
Link: http://lkml.kernel.org/r/1553267665-27228-1-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'iommu-updates-v5.2' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU updates from Joerg Roedel:
- ATS support for ARM-SMMU-v3.
- AUX domain support in the IOMMU-API and the Intel VT-d driver. This
adds support for multiple DMA address spaces per (PCI-)device. The
use-case is to multiplex devices between host and KVM guests in a
more flexible way than supported by SR-IOV.
- the rest are smaller cleanups and fixes, two of which needed to be
reverted after testing in linux-next.
* tag 'iommu-updates-v5.2' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (45 commits)
Revert "iommu/amd: Flush not present cache in iommu_map_page"
Revert "iommu/amd: Remove the leftover of bypass support"
iommu/vt-d: Fix leak in intel_pasid_alloc_table on error path
iommu/vt-d: Make kernel parameter igfx_off work with vIOMMU
iommu/vt-d: Set intel_iommu_gfx_mapped correctly
iommu/amd: Flush not present cache in iommu_map_page
iommu/vt-d: Cleanup: no spaces at the start of a line
iommu/vt-d: Don't request page request irq under dmar_global_lock
iommu/vt-d: Use struct_size() helper
iommu/mediatek: Fix leaked of_node references
iommu/amd: Remove amd_iommu_pd_list
iommu/arm-smmu: Log CBFRSYNRA register on context fault
iommu/arm-smmu-v3: Don't disable SMMU in kdump kernel
iommu/arm-smmu-v3: Disable tagged pointers
iommu/arm-smmu-v3: Add support for PCI ATS
iommu/arm-smmu-v3: Link domains and devices
iommu/arm-smmu-v3: Add a master->domain pointer
iommu/arm-smmu-v3: Store SteamIDs in master
iommu/arm-smmu-v3: Rename arm_smmu_master_data to arm_smmu_master
ACPI/IORT: Check ATS capability in root complex nodes
...
This reverts commit 1a1079011d.
This commit caused a NULL-ptr dereference bug and must be
reverted for now.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The dma_ranges list field of PCI host bridge structure has resource entries
in sorted order representing address ranges allowed for DMA transfers.
Process the list and reserve IOVA addresses that are not present in its
resource entries (i.e. DMA memory holes) to prevent allocating IOVA addresses
that cannot be accessed by PCI devices.
Based-on-a-patch-by: Oza Pawandeep <oza.oza@broadcom.com>
Signed-off-by: Srinath Mannam <srinath.mannam@broadcom.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Oza Pawandeep <poza@codeaurora.org>
Acked-by: Robin Murphy <robin.murphy@arm.com>
This reverts commit 7a5dbf3ab2.
This commit not only removes the leftovers of bypass
support, it also mostly removes the checking of the return
value of the get_domain() function. This can lead to silent
data corruption bugs when a device is not attached to its
dma_ops domain and a DMA-API function is called for that
device.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
If alloc_pages_node() fails, pasid_table is leaked. Free it.
Fixes: cc580e4126 ("iommu/vt-d: Per PCI device pasid table interfaces")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The kernel parameter igfx_off is used by users to disable
DMA remapping for the Intel integrated graphics device. It
was designed for bare metal cases where a dedicated IOMMU
is used for graphics. This doesn't apply to the virtual IOMMU
case, where an include-all IOMMU is used. This makes the
kernel parameter work with a virtual IOMMU as well.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Fixes: c0771df8d5 ("intel-iommu: Export a flag indicating that the IOMMU is used for iGFX.")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The intel_iommu_gfx_mapped flag is exported by the Intel
IOMMU driver to indicate whether an IOMMU is used for the
graphics device. In a virtualized IOMMU environment (e.g.
QEMU), an include-all IOMMU is used for the graphics device.
This flag is found to be clear even when the IOMMU is used.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Reported-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Fixes: c0771df8d5 ("intel-iommu: Export a flag indicating that the IOMMU is used for iGFX.")
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Check whether a not-present cache is present and flush it if there is.
Signed-off-by: Tom Murphy <tmurphy@arista.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Replace the whitespaces at the start of a line with tabs. No
functional changes.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
A recent change split iommu_dma_map_msi_msg() in two new functions. The
function was still implemented to avoid modifying all the callers at
once.
Now that all the callers have been reworked, iommu_dma_map_msi_msg() can
be removed.
Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
On RT, iommu_dma_map_msi_msg() may be called from non-preemptible
context. This will lead to a splat with CONFIG_DEBUG_ATOMIC_SLEEP as
the function uses spin_lock (spinlocks can sleep on RT).
iommu_dma_map_msi_msg() is used to map the MSI page in the IOMMU page
table and to update the MSI message with the IOVA.
Only the part that looks up the MSI page needs to be called in
preemptible context. As the MSI page cannot change over the lifecycle
of the MSI interrupt, the lookup can be cached and re-used later on.
iommu_dma_map_msi_msg() is now split in two functions (see the caller
sketch after the list):
- iommu_dma_prepare_msi(): This function will prepare the mapping
in the IOMMU and store the cookie in the structure msi_desc. This
function should be called in preemptible context.
- iommu_dma_compose_msi_msg(): This function will update the MSI
message with the IOVA when the device is behind an IOMMU.
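A hedged sketch of how an irqchip caller is expected to use the two halves
(callback names are illustrative):

#include <linux/dma-iommu.h>
#include <linux/msi.h>
#include <linux/types.h>

/* Runs in preemptible context, e.g. while setting up the interrupt. */
static int example_prepare(struct msi_desc *desc, phys_addr_t doorbell)
{
	return iommu_dma_prepare_msi(desc, doorbell);
}

/* May run with interrupts disabled: only patches msg with the cached IOVA. */
static void example_compose(struct msi_desc *desc, struct msi_msg *msg)
{
	iommu_dma_compose_msi_msg(desc, msg);
}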
Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Use new helper pci_dev_id() to simplify the code.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
Use new helper pci_dev_id() to simplify the code.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes, in particular in the
context in which this code is being used.
So, replace code of the following form:
size = sizeof(*info) + level * sizeof(info->path[0]);
with:
size = struct_size(info, path, level);
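For illustration, a hedged sketch with a made-up structure (not the driver's)
showing the trailing flexible array that struct_size() accounts for:

#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

struct example_info {
	int level;
	struct { u8 bus, device; } path[];	/* flexible array member */
};

static struct example_info *example_alloc(int level)
{
	struct example_info *info;

	/* Replaces sizeof(*info) + level * sizeof(info->path[0]). */
	info = kzalloc(struct_size(info, path, level), GFP_KERNEL);
	if (info)
		info->level = level;
	return info;
}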
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The call to of_parse_phandle returns a node pointer with refcount
incremented thus it must be explicitly decremented after the last
usage.
581 static int mtk_iommu_probe(struct platform_device *pdev)
582 {
...
626 for (i = 0; i < larb_nr; i++) {
627 struct device_node *larbnode;
...
631 larbnode = of_parse_phandle(...);
632 if (!larbnode)
633 return -EINVAL;
634
635 if (!of_device_is_available(larbnode))
636 continue; ---> leaked here
637
...
643 if (!plarbdev)
644 return -EPROBE_DEFER; ---> leaked here
...
647 component_match_add_release(dev, &match, release_of,
648 compare_of, larbnode);
---> release_of will call of_node_put
649 }
...
650
Detected by coccinelle with the following warnings:
./drivers/iommu/mtk_iommu.c:644:3-9: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 631, but without a corresponding object release within this function.
Signed-off-by: Wen Yang <wen.yang99@zte.com.cn>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-mediatek@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Matthias Brugger <mbrugger@suse.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
This variable holds a global list of allocated protection
domains in the AMD IOMMU driver. By now this list is never
traversed anymore, so the list and the lock protecting it
can be removed.
Cc: Tom Murphy <tmurphy@arista.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Merge tag 'drm-misc-next-2019-04-18' of git://anongit.freedesktop.org/drm/drm-misc into drm-next
drm-misc-next for v5.2:
UAPI Changes:
- Document which feature flags belong to which command in virtio_gpu.h
- Make the FB_DAMAGE_CLIPS available for atomic userspace only, it's useless for legacy.
Cross-subsystem Changes:
- Add device tree bindings for lg,acx467akm-7 panel and ST-Ericsson Multi Channel Display Engine MCDE
- Add parameters to the device tree bindings for tfp410
- iommu/io-pgtable: Add ARM Mali midgard MMU page table format
- dma-buf: Only do a 64-bits seqno compare when driver explicitly asks for it, else wraparound.
- Use the 64-bits compare for dma-fence-chains
Core Changes:
- Make the fb conversion functions use __iomem dst.
- Rename drm_client_add to drm_client_register
- Move intel_fb_initial_config to core.
- Add a drm_gem_objects_lookup helper
- Add drm_gem_fence_array helpers, and use it in lima.
- Add drm_format_helper.c to kerneldoc.
Driver Changes:
- Add panfrost driver for Mali Midgard/Bifrost.
- Converts bochs to use the simple display type.
- Small fixes to sun4i, tinydrm, ti-fp410.
- Fix aspeed's Kconfig options.
- Make some symbols/functions static in lima, sun4i and meson.
- Add a driver for the lg,acx467akm-7 panel.
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/737ad994-213d-45b5-207a-b99d795acd21@linux.intel.com
Bits [15:0] in the CBFRSYNRA register contain information about
the StreamID of the incoming transaction that generated the
fault. Dump the CBFRSYNRA register to get this info.
This is especially useful in a distributed SMMU architecture
where multiple masters are connected to the SMMU.
The SID information helps to quickly identify the faulting
master device.
Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Disabling the SMMU when probing from within a kdump kernel so that all
incoming transactions are terminated can prevent the core of the crashed
kernel from being transferred off the machine if all I/O devices are
behind the SMMU.
Instead, continue to probe the SMMU after it is disabled so that we can
reinitialise it entirely and re-attach the DMA masters as they are reset.
Since the kdump kernel may not have drivers for all of the active DMA
masters, we suppress fault reporting to avoid spamming the console and
swamping the IRQ threads.
Reported-by: "Leizhen (ThunderTown)" <thunder.leizhen@huawei.com>
Tested-by: "Leizhen (ThunderTown)" <thunder.leizhen@huawei.com>
Tested-by: Bhupesh Sharma <bhsharma@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ARM architecture has a "Top Byte Ignore" (TBI) option that makes the
MMU mask out bits [63:56] of an address, allowing a userspace application
to store data in its pointers. This option is incompatible with PCI ATS.
If TBI is enabled in the SMMU and userspace triggers DMA transactions on
tagged pointers, the endpoint might create ATC entries for addresses that
include a tag. Software would then have to send ATC invalidation packets
for each of the 255 possible aliases of an address, or just wipe the whole address
space. This is not a viable option, so disable TBI.
The impact of this change is unclear, since there are very few users of
tagged pointers, much less SVA. But the requirement introduced by this
patch doesn't seem excessive: a userspace application using both tagged
pointers and SVA should now sanitize addresses (clear the tag) before
using them for device DMA.
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
PCIe devices can implement their own TLB, named Address Translation Cache
(ATC). Enable Address Translation Service (ATS) for devices that support
it and send them invalidation requests whenever we invalidate the IOTLBs.
ATC invalidation is allowed to take up to 90 seconds, according to the
PCIe spec, so it is possible to get an SMMU command queue timeout during
normal operations. However we expect implementations to complete
invalidation in reasonable time.
We only enable ATS for "trusted" devices, and currently rely on the
pci_dev->untrusted bit. For ATS we have to trust that:
(a) The device doesn't issue "translated" memory requests for addresses
that weren't returned by the SMMU in a Translation Completion. In
particular, if we give control of a device or device partition to a VM
or userspace, software cannot program the device to access arbitrary
"translated" addresses.
(b) The device follows permissions granted by the SMMU in a Translation
Completion. If the device requested read+write permission and only
got read, then it doesn't write.
(c) The device doesn't send Translated transactions for an address that
was invalidated by an ATC invalidation.
Note that the PCIe specification explicitly requires all of these, so we
can assume that implementations will cleanly shield ATCs from software.
All ATS translated requests still go through the SMMU, to walk the stream
table and check that the device is actually allowed to send translated
requests.
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When removing a mapping from a domain, we need to send an invalidation to
all devices that might have stored it in their Address Translation Cache
(ATC). In addition when updating the context descriptor of a live domain,
we'll need to send invalidations for all devices attached to it.
Maintain a list of devices in each domain, protected by a spinlock. It is
updated every time we attach or detach devices to and from domains.
It needs to be a spinlock because we'll invalidate ATC entries from
within hardirq-safe contexts, but it may be possible to relax the read
side with RCU later.
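A hedged sketch of the structure being described (field and function names
are illustrative, not the driver's):

#include <linux/list.h>
#include <linux/spinlock.h>

struct example_domain {
	spinlock_t		devices_lock;
	struct list_head	devices;	/* of example_master.domain_head */
};

struct example_master {
	struct example_domain	*domain;
	struct list_head	domain_head;
};

static void example_attach(struct example_domain *dom, struct example_master *m)
{
	unsigned long flags;

	/* irqsave: ATC invalidation may walk the list from hardirq-safe context. */
	spin_lock_irqsave(&dom->devices_lock, flags);
	list_add(&m->domain_head, &dom->devices);
	spin_unlock_irqrestore(&dom->devices_lock, flags);
	m->domain = dom;
}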
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
As we're going to track domain-master links more closely for ATS and CD
invalidation, add a pointer to the attached domain in struct
arm_smmu_master. As a result, arm_smmu_strtab_ent is redundant and can be
removed.
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Simplify the attach/detach code a bit by keeping a pointer to the stream
IDs in the master structure. Although not completely obvious here, it does
make the subsequent support for ATS, PRI and PASID a bit simpler.
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The arm_smmu_master_data structure already represents more than just the
firmware data associated with a master, and will be used extensively to
represent a device's state when implementing more SMMU features. Rename
the structure to arm_smmu_master.
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ARM Mali Midgard GPU page table format is similar to the standard 64-bit
stage 1 format, but has a few differences. Add a new format type to represent
it. The input address size is 48 bits and the output address size is 40 bits
(and possibly less?). Note that the later Bifrost GPUs follow the standard
64-bit stage 1 format.
The differences in the format compared to 64-bit stage 1 format are:
The 3rd level page entry bits are 0x1 instead of 0x3 for page entries.
The access flags are not read-only and unprivileged, but read and write.
This is similar to stage 2 entries, but the memory attributes field matches
stage 1 being an index.
The nG bit is not set by the vendor driver. This one didn't seem to matter,
but we'll keep it aligned to the vendor driver.
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: iommu@lists.linux-foundation.org
Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190409205427.6943-2-robh@kernel.org
By default, for performance considerations, the Intel IOMMU
driver won't flush the IOTLB immediately after a buffer is
unmapped. It schedules a thread and flushes the IOTLB in a
batched mode. This isn't suitable for untrusted devices,
since they can still access the memory even if they aren't
supposed to do so.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Xu Pengfei <pengfei.xu@intel.com>
Tested-by: Mika Westerberg <mika.westerberg@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The exclusion range limit register needs to contain the
base address of the last page that is part of the range, as
bits 0-11 of this register are treated as 0xfff by the
hardware for comparisons.
So correctly set the exclusion range in the hardware to the
last page which is _in_ the range.
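As a hedged illustration of the arithmetic (not the driver's actual code),
the value programmed as the limit is the base address of the last page inside
the range:

#include <linux/types.h>

static u64 example_exclusion_limit(u64 start, u64 length)
{
	/* Bits 0-11 read back as 0xfff, so program the start of the last page. */
	return (start + length - 1) & ~0xfffULL;
}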
Fixes: b2026aa2dc ('x86, AMD IOMMU: add functions for programming IOMMU MMIO space')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The intel-iommu driver currently has a partial reimplementation
of the direct mapping code for devices that use pass through
mode. Replace that code with calls to the relevant dma_direct
routines at the highest level. This means we have exactly the
same behavior as the dma-direct code itself, and can prepare for
eventually only attaching the intel_iommu ops to devices that
actually need dynamic iommu mappings.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Invert the return value to avoid double negatives, use a bool
instead of int as the return value, and reduce some indentation
after early returns.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The AMD iommu dma_ops are only attached on a per-device basis when an
actual translation is needed. Remove the leftover bypass support which
in parts was already broken (e.g. it always returns 0 from ->map_sg).
Use the opportunity to remove a few local variables and move assignments
into the declaration line where they were previously separated by the
bypass check.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Commit e5567f5f67 ("PCI/ATS: Add pci_prg_resp_pasid_required()
interface.") added a common interface to check the PASID bit in the PRI
capability. Use it in the AMD driver.
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
This adds support to return the default PASID associated with
an auxiliary domain. The PCI device which is bound to this
domain should use this value as the PASID for all DMA requests
of the subset of the device which is isolated and protected by
this domain.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
When multiple domains per device have been enabled by the
device driver, the device will tag the default PASID of
the domain onto all DMA traffic out of the subset of this
device, and the IOMMU should translate the DMA requests
at PASID granularity.
This adds the intel_iommu_aux_attach/detach_device() ops
to support managing PASID granular translation structures
when the device driver has enabled multiple domains per
device.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
This part of the code could be used by both the normal and the
aux-domain specific attach entries. Hence move it into a
common function to avoid duplication.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
This adds the iommu ops entries for aux-domain per-device
feature query and enable/disable.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
This moves intel_iommu_enable_pasid() out of the scope of
CONFIG_INTEL_IOMMU_SVM, as more and more features require
the PASID function.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Add bind() and unbind() operations to the IOMMU API.
iommu_sva_bind_device() binds a device to an mm, and returns a handle to
the bond, which is released by calling iommu_sva_unbind_device().
Each mm bound to devices gets a PASID (by convention, a 20-bit system-wide
ID representing the address space), which can be retrieved with
iommu_sva_get_pasid(). When programming DMA addresses, device drivers
include this PASID in a device-specific manner, to let the device access
the given address space. Since the process memory may be paged out, device
and IOMMU must support I/O page faults (e.g. PCI PRI).
Using iommu_sva_set_ops(), device drivers provide an mm_exit() callback
that is called by the IOMMU driver if the process exits before the device
driver called unbind(). In mm_exit(), the device driver should disable DMA
from the given context, so that the core IOMMU can reallocate the PASID.
Whether the process exited or not, the device driver should always release
the handle with unbind().
To use these functions, device driver must first enable the
IOMMU_DEV_FEAT_SVA device feature with iommu_dev_enable_feature().
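A hedged sketch of the driver-side flow (the surrounding driver code is
illustrative; error unwinding trimmed):

#include <linux/err.h>
#include <linux/iommu.h>

static int example_bind(struct device *dev, struct mm_struct *mm)
{
	struct iommu_sva *handle;
	int ret;

	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
	if (ret)
		return ret;

	handle = iommu_sva_bind_device(dev, mm, NULL);
	if (IS_ERR(handle))
		return PTR_ERR(handle);

	/* Program this PASID into the device in a device-specific way,
	 * and call iommu_sva_unbind_device(handle) on teardown. */
	return iommu_sva_get_pasid(handle);
}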
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Sharing a physical PCI device in a finer-granularity way
is becoming a consensus in the industry. IOMMU vendors
are also making efforts to support such sharing as well
as possible. Among these efforts, the capability to support
finer-granularity DMA isolation is a common requirement
due to security considerations. With finer-granularity
DMA isolation, subsets of a PCI function can be isolated
from each other by the IOMMU. As a result, there is a
request in software to attach multiple domains to a physical
PCI device. One example of such a use model is Intel
Scalable IOV [1] [2]. The Intel VT-d 3.0 spec [3] introduces
the scalable mode which enables PASID-granularity DMA
isolation.
This adds the APIs to support multiple domains per device.
In order to ease the discussions, we call it 'a domain in
auxiliary mode' or simply 'auxiliary domain' when multiple
domains are attached to a physical device.
The APIs include:
* iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_AUX)
- Detect both IOMMU and PCI endpoint devices supporting
the feature (aux-domain here) without the host driver
dependency.
* iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX)
- Check the enabling status of the feature (aux-domain
here). The aux-domain interfaces are available only
if this returns true.
* iommu_dev_enable/disable_feature(dev, IOMMU_DEV_FEAT_AUX)
- Enable/disable device specific aux-domain feature.
* iommu_aux_attach_device(domain, dev)
- Attaches @domain to @dev in the auxiliary mode. Multiple
domains could be attached to a single device in the
auxiliary mode with each domain representing an isolated
address space for an assignable subset of the device.
* iommu_aux_detach_device(domain, dev)
- Detach @domain which has been attached to @dev in the
auxiliary mode.
* iommu_aux_get_pasid(domain, dev)
- Return ID used for finer-granularity DMA translation.
For the Intel Scalable IOV usage model, this will be
a PASID. The device which supports Scalable IOV needs
to write this ID to the device register so that DMA
requests could be tagged with a right PASID prefix.
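A hedged usage sketch tying these calls together (error unwinding trimmed;
the surrounding driver code is illustrative):

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/iommu.h>

static int example_assign_subdevice(struct device *dev)
{
	struct iommu_domain *domain;
	int ret;

	if (!iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_AUX))
		return -ENODEV;

	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_AUX);
	if (ret)
		return ret;

	domain = iommu_domain_alloc(dev->bus);
	if (!domain)
		return -ENOMEM;

	ret = iommu_aux_attach_device(domain, dev);
	if (ret)
		return ret;

	/* Write this ID to the device so its DMA is tagged with the PASID. */
	return iommu_aux_get_pasid(domain, dev);
}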
This has been updated with the latest proposal from Joerg
posted here [5].
Many people were involved in discussions of this design.
Kevin Tian <kevin.tian@intel.com>
Liu Yi L <yi.l.liu@intel.com>
Ashok Raj <ashok.raj@intel.com>
Sanjay Kumar <sanjay.k.kumar@intel.com>
Jacob Pan <jacob.jun.pan@linux.intel.com>
Alex Williamson <alex.williamson@redhat.com>
Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Joerg Roedel <joro@8bytes.org>
and some discussions can be found here [4] [5].
[1] https://software.intel.com/en-us/download/intel-scalable-io-virtualization-technical-specification
[2] https://schd.ws/hosted_files/lc32018/00/LC3-SIOV-final.pdf
[3] https://software.intel.com/en-us/download/intel-virtualization-technology-for-directed-io-architecture-specification
[4] https://lkml.org/lkml/2018/7/26/4
[5] https://www.spinics.net/lists/iommu/msg31874.html
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Liu Yi L <yi.l.liu@intel.com>
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Suggested-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Suggested-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Set the PTE read/write attributes according to the protections requested
by the IOMMU API.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Release all memory allocations associated with a released domain and emit
a warning if the domain is in use at the time of destruction.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Both Tegra30 and Tegra114 have 4 ASIDs, and the corresponding bitfield of
the TLB_FLUSH register differs from later Tegra generations that have 128
ASIDs.
As a result, PTEs are now flushed correctly from the TLB, and this fixes
problems with graphics (randomly failing tests) on Tegra30.
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
If you're bisecting why your peripherals stopped working, it's
probably this CL. Specifically if you see this in your dmesg:
Unexpected global fault, this could be serious
...then it's almost certainly this CL.
Running your IOMMU-enabled peripherals with the IOMMU in bypass mode
is insecure and effectively disables the protection they provide.
There are few reasons to allow unmatched stream bypass, and even fewer
good ones.
This patch starts the transition over to make it much harder to run
your system insecurely. Expected steps:
1. By default disable bypass (so anyone insecure will notice) but make
it easy for someone to re-enable bypass with just a KConfig change.
That's this patch.
2. After people have had a little time to come to grips with the fact
that they need to set their IOMMUs properly and have had time to
dig into how to do this, the KConfig will be eliminated and bypass
will simply be disabled. Folks who are truly upset and still
haven't fixed their system can either figure out how to add
'arm-smmu.disable_bypass=n' to their command line or revert the
patch in their own private kernel. Of course these folks will be
less secure.
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Marc Gonzalez <marc.w.gonzalez@free.fr>
Tested-by: Marc Gonzalez <marc.w.gonzalez@free.fr>
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge misc fixes from Andrew Morton:
"22 fixes"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (22 commits)
fs/proc/proc_sysctl.c: fix NULL pointer dereference in put_links
fs: fs_parser: fix printk format warning
checkpatch: add %pt as a valid vsprintf extension
mm/migrate.c: add missing flush_dcache_page for non-mapped page migrate
drivers/block/zram/zram_drv.c: fix idle/writeback string compare
mm/page_isolation.c: fix a wrong flag in set_migratetype_isolate()
mm/memory_hotplug.c: fix notification in offline error path
ptrace: take into account saved_sigmask in PTRACE{GET,SET}SIGMASK
fs/proc/kcore.c: make kcore_modules static
include/linux/list.h: fix list_is_first() kernel-doc
mm/debug.c: fix __dump_page when mapping->host is not set
mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT is specified
include/linux/hugetlb.h: convert to use vm_fault_t
iommu/io-pgtable-arm-v7s: request DMA32 memory, and improve debugging
mm: add support for kmem caches in DMA32 zone
ocfs2: fix inode bh swapping mixup in ocfs2_reflink_inodes_lock
mm/hotplug: fix offline undo_isolate_page_range()
fs/open.c: allow opening only regular files during execve()
mailmap: add Changbin Du
mm/debug.c: add a cast to u64 for atomic64_read()
...
IOMMUs using ARMv7 short-descriptor format require page tables (level 1
and 2) to be allocated within the first 4GB of RAM, even on 64-bit
systems.
For level 1/2 pages, ensure GFP_DMA32 is used if CONFIG_ZONE_DMA32 is
defined (e.g. on arm64 platforms).
For level 2 pages, allocate a slab cache in SLAB_CACHE_DMA32. Note that
we do not explicitly pass GFP_DMA[32] to kmem_cache_zalloc, as this is
not strictly necessary, and would cause a warning in mm/sl*b.c, as we
did not update GFP_SLAB_BUG_MASK.
Also, print an error when the physical address does not fit in
32-bit, to make debugging easier in the future.
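A hedged sketch of the level-2 cache creation described above (cache name and
sizes are illustrative), which avoids passing GFP_DMA32 to the allocation
itself:

#include <linux/errno.h>
#include <linux/slab.h>

static struct kmem_cache *example_l2_tables;

static int example_init_l2_cache(size_t l2_size)
{
	/* The DMA32 constraint lives in the cache, not in the GFP flags. */
	example_l2_tables = kmem_cache_create("example-armv7s-l2",
					      l2_size, l2_size,
					      SLAB_CACHE_DMA32, NULL);
	return example_l2_tables ? 0 : -ENOMEM;
}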
Link: http://lkml.kernel.org/r/20181210011504.122604-3-drinkcat@chromium.org
Fixes: ad67f5a654 ("arm64: replace ZONE_DMA with ZONE_DMA32")
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Hsin-Yi Wang <hsinyi@chromium.org>
Cc: Huaisheng Ye <yehs1@lenovo.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Sasha Levin <Alexander.Levin@microsoft.com>
Cc: Tomasz Figa <tfiga@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yingjoe Chen <yingjoe.chen@mediatek.com>
Cc: Yong Wu <yong.wu@mediatek.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If a device has an exclusion range specified in the IVRS
table, this region needs to be reserved in the iova-domain
of that device. This hasn't happened until now and can cause
data corruption on data transferred with these devices.
Treat exclusion ranges as reserved regions in the iommu-core
to fix the problem.
Fixes: be2a022c0d ('x86, AMD IOMMU: add functions to parse IOMMU memory mapping requirements for devices')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Gary R Hook <gary.hook@amd.com>
The iommu_callback_data is not used anywhere, remove it to make
the code more concise.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Print the warning about the fall-back to IOMMU_DOMAIN_DMA in
iommu_group_get_for_dev() only when such a domain was
actually allocated.
Otherwise the user will get misleading warnings in the
kernel log when the iommu driver used doesn't support
IOMMU_DOMAIN_DMA and IOMMU_DOMAIN_IDENTITY.
Fixes: fccb4e3b8a ('iommu: Allow default domain type to be set on the kernel command line')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The driver sets a default domain id (FLPT_DEFAULT_DID) in the
first-level-only PASID entry, but saves a different domain id
in @sdev->did. The value saved in @sdev->did will be used to
invalidate the translation caches. Hence, the driver might
end up invalidating the caches with the wrong domain id.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Fixes: 1c4f88b7f1 ("iommu/vt-d: Shared virtual address in scalable mode")
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The spec states in 10.4.16 that the Protected Memory Enable
Register should be treated as read-only for implementations
not supporting protected memory regions (PLMR and PHMR fields
reported as Clear in the Capability register).
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: mark gross <mgross@intel.com>
Suggested-by: Ashok Raj <ashok.raj@intel.com>
Fixes: f8bab73515 ("intel-iommu: PMEN support")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
If a 32-bit allocation request is too big to possibly succeed, it
exits early with a failure and should never update max32_alloc_size.
This patch fixes the current code; now the size is only updated if
the slow path failed while walking the tree. Without the fix the
allocation may enter the slow path again even if there was an earlier
failure for a request of the same or a smaller size.
Cc: <stable@vger.kernel.org> # 4.20+
Fixes: bee60e94a1 ("iommu/iova: Optimise attempts to allocate iova from 32bit address range")
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Robert Richter <rrichter@marvell.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Switch to bitmap_zalloc() to show clearly what we are allocating.
Besides that, it returns a pointer of bitmap type instead of an opaque void *.
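For illustration, such conversions are generally of this shape (a generic
sketch, not a hunk from this patch):

  /* before: the size calculation obscures what is being allocated */
  unsigned long *bitmap = kcalloc(BITS_TO_LONGS(nbits),
                                  sizeof(unsigned long), GFP_KERNEL);
  kfree(bitmap);

  /* after: intent and return type are explicit */
  unsigned long *bitmap = bitmap_zalloc(nbits, GFP_KERNEL);
  bitmap_free(bitmap);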
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Merge tag 'iommu-fix-v5.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU fix from Joerg Roedel:
"Fix a NULL-pointer dereference issue in the ACPI device matching code
of the AMD IOMMU driver"
* tag 'iommu-fix-v5.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
iommu/amd: Fix NULL dereference bug in match_hid_uid
Add a non-NULL check to fix a potential NULL pointer dereference, and
clean up the code so the function is only called once.
Signed-off-by: Aaron Ma <aaron.ma@canonical.com>
Fixes: 2bf9a0a127 ('iommu/amd: Add iommu support for ACPI HID devices')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Merge tag 'iommu-updates-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU updates from Joerg Roedel:
- A big cleanup and optimization patch-set for the Tegra GART driver
- Documentation updates and fixes for the IOMMU-API
- Support for page request in Intel VT-d scalable mode
- Intel VT-d dma_[un]map_resource() support
- Updates to the ATS enabling code for PCI (acked by Bjorn) and Intel
VT-d to align with the latest version of the ATS spec
- Relaxed IRQ source checking in the Intel VT-d driver for some aliased
devices, needed for future devices which send IRQ messages from more
than one request-ID
- IRQ remapping driver for Hyper-V
- Patches to make generic IOVA and IO-Page-Table code usable outside of
the IOMMU code
- Various other small fixes and cleanups
* tag 'iommu-updates-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (60 commits)
iommu/vt-d: Get domain ID before clear pasid entry
iommu/vt-d: Fix NULL pointer reference in intel_svm_bind_mm()
iommu/vt-d: Set context field after value initialized
iommu/vt-d: Disable ATS support on untrusted devices
iommu/mediatek: Fix semicolon code style issue
MAINTAINERS: Add Hyper-V IOMMU driver into Hyper-V CORE AND DRIVERS scope
iommu/hyper-v: Add Hyper-V stub IOMMU driver
x86/Hyper-V: Set x2apic destination mode to physical when x2apic is available
PCI/ATS: Add inline to pci_prg_resp_pasid_required()
iommu/vt-d: Check identity map for hot-added devices
iommu: Fix IOMMU debugfs fallout
iommu: Document iommu_ops.is_attach_deferred()
iommu: Document iommu_ops.iotlb_sync_map()
iommu/vt-d: Enable ATS only if the device uses page aligned address.
PCI/ATS: Add pci_ats_page_aligned() interface
iommu/vt-d: Fix PRI/PASID dependency issue.
PCI/ATS: Add pci_prg_resp_pasid_required() interface.
iommu/vt-d: Allow interrupts from the entire bus for aliased devices
iommu/vt-d: Add helper to set an IRTE to verify only the bus number
iommu: Fix flush_tlb_all typo
...
Merge tag 'driver-core-5.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
Pull driver core updates from Greg KH:
"Here is the big driver core patchset for 5.1-rc1
More patches than "normal" here this merge window, due to some work in
the driver core by Alexander Duyck to rework the async probe
functionality to work better for a number of devices, and independent
work from Rafael for the device link functionality to make it work
"correctly".
Also in here is:
- lots of BUS_ATTR() removals, the macro is about to go away
- firmware test fixups
- ihex fixups and simplification
- component additions (also includes i915 patches)
- lots of minor coding style fixups and cleanups.
All of these have been in linux-next for a while with no reported
issues"
* tag 'driver-core-5.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (65 commits)
driver core: platform: remove misleading err_alloc label
platform: set of_node in platform_device_register_full()
firmware: hardcode the debug message for -ENOENT
driver core: Add missing description of new struct device_link field
driver core: Fix PM-runtime for links added during consumer probe
drivers/component: kerneldoc polish
async: Add cmdline option to specify drivers to be async probed
driver core: Fix possible supplier PM-usage counter imbalance
PM-runtime: Fix __pm_runtime_set_status() race with runtime resume
driver: platform: Support parsing GpioInt 0 in platform_get_irq()
selftests: firmware: fix verify_reqs() return value
Revert "selftests: firmware: remove use of non-standard diff -Z option"
Revert "selftests: firmware: add CONFIG_FW_LOADER_USER_HELPER_FALLBACK to config"
device: Fix comment for driver_data in struct device
kernfs: Allocating memory for kernfs_iattrs with kmem_cache.
sysfs: remove unused include of kernfs-internal.h
driver core: Postpone DMA tear-down until after devres release
driver core: Document limitation related to DL_FLAG_RPM_ACTIVE
PM-runtime: Take suppliers into account in __pm_runtime_set_status()
device.h: Add __cold to dev_<level> logging functions
...
Patch series "Replace all open encodings for NUMA_NO_NODE", v3.
All these places for replacement were found by running the following
grep patterns on the entire kernel code. Please let me know if this
might have missed some instances. This might also have replaced some
false positives. I would appreciate suggestions, input and review.
1. git grep "nid == -1"
2. git grep "node == -1"
3. git grep "nid = -1"
4. git grep "node = -1"
This patch (of 2):
At present there are multiple places where an invalid node number is
encoded as -1. Even though this is implicitly understood, it is always
better to use a macro. Replace these open encodings for an invalid node
number with the global macro NUMA_NO_NODE. This helps remove NUMA
related assumptions like 'invalid node' from various places redirecting
them to a common definition.
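For illustration, each replacement is of this shape (a generic sketch):

  #include <linux/numa.h>

  /* before: open-coded invalid node */
  int nid = -1;

  /* after: the intent is spelled out */
  int nid = NUMA_NO_NODE;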
Link: http://lkml.kernel.org/r/1545127933-10711-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> [ixgbe]
Acked-by: Jens Axboe <axboe@kernel.dk> [mtip32xx]
Acked-by: Vinod Koul <vkoul@kernel.org> [dmaengine.c]
Acked-by: Michael Ellerman <mpe@ellerman.id.au> [powerpc]
Acked-by: Doug Ledford <dledford@redhat.com> [drivers/infiniband]
Cc: Joseph Qi <jiangqi903@gmail.com>
Cc: Hans Verkuil <hverkuil@xs4all.nl>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'iommu-fix-v5.0-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU fix from Joerg Roedel:
"One important fix for a memory corruption issue in the Intel VT-d
driver that triggers on hardware with deep PCI hierarchies"
* tag 'iommu-fix-v5.0-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
iommu/dmar: Fix buffer overflow during PCI bus notification
After tearing down a pasid entry, the domain id is used to
invalidate the translation caches. Retrieve the domain id
from the pasid entry value before clearing the pasid entry.
Otherwise, we will always use domain id 0.
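The ordering change amounts to something like this (a hedged sketch; the
helper names are assumptions standing in for the intel-pasid code, not a
verbatim hunk):

  u16 did;

  /* read the domain id out of the pasid entry *before* wiping it ... */
  did = pasid_get_domain_id(pte);        /* assumed helper name */
  intel_pasid_clear_entry(dev, pasid);   /* assumed helper name */

  /* ... so the cache invalidation below no longer sees domain id 0 */
  flush_pasid_caches(iommu, did, pasid); /* placeholder for the flush calls */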
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Fixes: 6f7db75e1c ("iommu/vt-d: Add second level page table interface")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The Intel IOMMU can be turned off with intel_iommu=off. If the Intel
IOMMU is off, the intel_iommu struct will not be initialized, and a
NULL pointer dereference will happen when device drivers call
intel_svm_bind_mm().
Add a dmar_disabled check to avoid the NULL pointer dereference.
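A minimal sketch of the check (illustrative; the exact placement and error
code may differ from the patch):

  /* at the top of intel_svm_bind_mm(), before the iommu is dereferenced */
  struct intel_iommu *iommu = intel_svm_device_to_iommu(dev);

  if (dmar_disabled || !iommu)
          return -EINVAL;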
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Reported-by: Dave Jiang <dave.jiang@intel.com>
Fixes: 2f26e0a9c9 ("iommu/vt-d: Add basic SVM PASID support")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Otherwise, the translation type field of a context entry for
a PCI device will always be 0, and all translated DMA requests
will be blocked by the IOMMU. As a result, PCI devices with
PCI ATS (device IOTLB) support won't work as expected.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Fixes: 7373a8cc38 ("iommu/vt-d: Setup context and enable RID2PASID support")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Commit fb58fdcd29 ("iommu/vt-d: Do not enable ATS for untrusted
devices") disables ATS support on the devices which have been marked
as untrusted. Unfortunately this is not enough to fix the DMA attack
vulnerabilities, because the IOMMU driver still allows translated
requests as long as a device advertises the ATS capability. Hence a
malicious peripheral device could use this to bypass the IOMMU.
This disables the ATS support on untrusted devices by clearing the
internal per-device ATS mark. As a result, the IOMMU driver will block
any translated requests from any device marked as untrusted.
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Mika Westerberg <mika.westerberg@linux.intel.com>
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Suggested-by: Ashok Raj <ashok.raj@intel.com>
Fixes: fb58fdcd29 ("iommu/vt-d: Do not enable ATS for untrusted devices")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
On bare metal, enabling X2APIC mode requires the interrupt remapping
function, which helps to deliver IRQs to CPUs with 32-bit APIC IDs.
Hyper-V doesn't provide an interrupt remapping function so far, but the
Hyper-V MSI protocol already supports delivering interrupts to CPUs
whose virtual processor index is larger than 255. IO-APIC interrupts
still have the 8-bit APIC ID limitation.
This patch adds a Hyper-V stub IOMMU driver in order to enable X2APIC
mode successfully in Hyper-V Linux guests. The driver reports X2APIC
interrupt remapping capability when X2APIC mode is available.
Otherwise, it creates a Hyper-V IRQ domain to limit the affinity of
IO-APIC interrupts and make sure CPUs assigned an IO-APIC interrupt
have an 8-bit APIC ID.
Define 24 IO-APIC remapping entries because Hyper-V only exposes a
single IO-APIC, and one IO-APIC has 24 pins according to the IO-APIC
spec (https://pdos.csail.mit.edu/6.828/2016/readings/ia32/ioapic.pdf).
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The Intel IOMMU driver will put devices into a static identity
mapped domain during boot if the kernel parameter "iommu=pt" is
used. That means the IOMMU hardware will translate a DMA address
into the same memory address.
Unfortunately, hot-added devices are not subject to this. That
results in some devices not working properly after being hot added. A
quick way to reproduce this issue is to boot a system with
iommu=pt
and remove and then re-add a PCI device with
echo 1 > /sys/bus/pci/devices/[pci_source_id]/remove
echo 1 > /sys/bus/pci/rescan
You will find that the identity mapped domain has been replaced with a
normal domain.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: stable@vger.kernel.org
Reported-by: Jis Ben <jisben@google.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: James Dong <xmdong@google.com>
Fixes: 99dcadede4 ('intel-iommu: Support PCIe hot-plug')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Commit 57384592c4 ("iommu/vt-d: Store bus information in RMRR PCI
device path") changed the type of the path data, however, the change in
path type was not reflected in size calculations. Update to use the
correct type and prevent a buffer overflow.
This bug manifests in systems with deep PCI hierarchies, and can lead to
an overflow of the static allocated buffer (dmar_pci_notify_info_buf),
or can lead to overflow of slab-allocated data.
BUG: KASAN: global-out-of-bounds in dmar_alloc_pci_notify_info+0x1d5/0x2e0
Write of size 1 at addr ffffffff90445d80 by task swapper/0/1
CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 4.14.87-rt49-02406-gd0a0e96 #1
Call Trace:
? dump_stack+0x46/0x59
? print_address_description+0x1df/0x290
? dmar_alloc_pci_notify_info+0x1d5/0x2e0
? kasan_report+0x256/0x340
? dmar_alloc_pci_notify_info+0x1d5/0x2e0
? e820__memblock_setup+0xb0/0xb0
? dmar_dev_scope_init+0x424/0x48f
? __down_write_common+0x1ec/0x230
? dmar_dev_scope_init+0x48f/0x48f
? dmar_free_unused_resources+0x109/0x109
? cpumask_next+0x16/0x20
? __kmem_cache_create+0x392/0x430
? kmem_cache_create+0x135/0x2f0
? e820__memblock_setup+0xb0/0xb0
? intel_iommu_init+0x170/0x1848
? _raw_spin_unlock_irqrestore+0x32/0x60
? migrate_enable+0x27a/0x5b0
? sched_setattr+0x20/0x20
? migrate_disable+0x1fc/0x380
? task_rq_lock+0x170/0x170
? try_to_run_init_process+0x40/0x40
? locks_remove_file+0x85/0x2f0
? dev_prepare_static_identity_mapping+0x78/0x78
? rt_spin_unlock+0x39/0x50
? lockref_put_or_lock+0x2a/0x40
? dput+0x128/0x2f0
? __rcu_read_unlock+0x66/0x80
? __fput+0x250/0x300
? __rcu_read_lock+0x1b/0x30
? mntput_no_expire+0x38/0x290
? e820__memblock_setup+0xb0/0xb0
? pci_iommu_init+0x25/0x63
? pci_iommu_init+0x25/0x63
? do_one_initcall+0x7e/0x1c0
? initcall_blacklisted+0x120/0x120
? kernel_init_freeable+0x27b/0x307
? rest_init+0xd0/0xd0
? kernel_init+0xf/0x120
? rest_init+0xd0/0xd0
? ret_from_fork+0x1f/0x40
The buggy address belongs to the variable:
dmar_pci_notify_info_buf+0x40/0x60
Fixes: 57384592c4 ("iommu/vt-d: Store bus information in RMRR PCI device path")
Signed-off-by: Julia Cartwright <julia@ni.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
A change made in the final version of IOMMU debugfs support replaced the
public function iommu_debugfs_new_driver_dir() by the public dentry
iommu_debugfs_dir in <linux/iommu.h>, but forgot to update both the
implementation in iommu-debugfs.c, and the patch description.
Fix this by exporting iommu_debugfs_dir, and removing the reference to
and implementation of iommu_debugfs_new_driver_dir().
Fixes: bad614b242 ("iommu: Enable debugfs exposure of IOMMU driver internals")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
As per the Intel VT-d specification, Rev 3.0 (Section 7.5.1.1, "Page
Request Descriptor"), the Intel IOMMU page request descriptor only uses
bits[63:12] of the page address. Hence the Intel IOMMU driver only
permits devices that advertise that they will only send Page Aligned
Requests to participate in the ATS service.
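In effect the ATS enable path grows a condition along these lines (a hedged
sketch; the info structure and the surrounding code are assumed context):

  /* only hand out ATS if the endpoint guarantees page-aligned requests */
  if (info->ats_supported && pci_ats_page_aligned(pdev))
          pci_enable_ats(pdev, VTD_PAGE_SHIFT);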
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Keith Busch <keith.busch@intel.com>
Suggested-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
In the Intel IOMMU, if the Page Request Queue (PRQ) is full, it will
automatically respond to the device with a success message as a keep
alive. When sending the success message, the IOMMU will include a PASID
in the Response Message if the Page Request had a PASID in the Request
Message, and it does not check against the PRG Response PASID
requirement of the device before sending the response. If the device
receives a PRG response with a PASID when it is not expecting one, the
device behavior is undefined. So if PASID is enabled in the device,
enable PRI only if the device expects a PASID in the PRG Response
Message.
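The resulting policy can be summarized as follows (a hedged sketch; the
helper here is illustrative, only pci_prg_resp_pasid_required() is the real
interface):

  static bool pri_is_safe(struct pci_dev *pdev, bool pasid_enabled)
  {
          /* with PASID in use, PRI is only safe if the device expects a
           * PASID in the PRG Response Message
           */
          return !pasid_enabled || pci_prg_resp_pasid_required(pdev);
  }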
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Keith Busch <keith.busch@intel.com>
Suggested-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
When a device has multiple aliases that all are from the same bus,
we program the IRTE to accept requests from any matching device on the
bus.
This is so that NTB devices, which can issue requests from multiple
bus-devfns, can pass MSI interrupts across the bridge.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The current code uses set_irte_sid() with SVT_VERIFY_BUS and PCI_DEVID
to set the SID value. However, this is very confusing because, with
SVT_VERIFY_BUS, the SID value is not a PCI devfn address, but the start
and end bus numbers to match against.
According to the Intel Virtualization Technology for Directed I/O
Architecture Specification, Rev. 3.0, page 9-36:
The most significant 8-bits of the SID field contains the Startbus#,
and the least significant 8-bits of the SID field contains the Endbus#.
Interrupt requests that reference this IRTE must have a requester-id
whose bus# (most significant 8-bits of requester-id) has a value equal
to or within the Startbus# to Endbus# range.
So to make this more clear, introduce a new set_irte_verify_bus() that
explicitly takes a start bus and end bus so that we can stop abusing
the PCI_DEVID macro.
This helper function will be called a second time in a subsequent patch.
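The new helper looks roughly like this (a sketch based on the description
above; the exact encoding follows the SID layout quoted from the spec):

  static void set_irte_verify_bus(struct irte *irte, unsigned int bus_start,
                                  unsigned int bus_end)
  {
          /* Startbus# in the high byte, Endbus# in the low byte of the SID */
          set_irte_sid(irte, SVT_VERIFY_BUS, SQ_ALL_16,
                       (bus_start << 8) | bus_end);
  }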
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
L1 tables are allocated with __get_dma_pages, and therefore already
ignored by kmemleak.
Without this, the kernel would print this error message on boot,
when the first L1 table is allocated:
[ 2.810533] kmemleak: Trying to color unknown object at 0xffffffd652388000 as Black
[ 2.818190] CPU: 5 PID: 39 Comm: kworker/5:0 Tainted: G S 4.19.16 #8
[ 2.831227] Workqueue: events deferred_probe_work_func
[ 2.836353] Call trace:
...
[ 2.852532] paint_ptr+0xa0/0xa8
[ 2.855750] kmemleak_ignore+0x38/0x6c
[ 2.859490] __arm_v7s_alloc_table+0x168/0x1f4
[ 2.863922] arm_v7s_alloc_pgtable+0x114/0x17c
[ 2.868354] alloc_io_pgtable_ops+0x3c/0x78
...
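The fix amounts to restricting the annotation to the slab-allocated L2
tables (a hedged sketch of the relevant spot in __arm_v7s_alloc_table; the
level check is illustrative):

  /* L1 tables come from __get_dma_pages and are invisible to kmemleak;
   * only the kmem_cache-backed L2 tables need the annotation */
  if (lvl == 2)
          kmemleak_ignore(table);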
Fixes: e5fc9753b1 ("iommu/io-pgtable: Add ARMv7 short descriptor support")
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The "Domain 0 is reserved, so dont process it" comment suggests that a NULL
pointer corresponds to domain 0. I don't think that's true, and in any
case, every caller supplies a non-NULL domain pointer that has already been
dereferenced, so the test is unnecessary.
Remove the test for a null "domain" pointer. No functional change
intended.
This null pointer check was added by 5e98c4b1d6 ("Allocation and free
functions of virtual machine domain").
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
domain_remove_dev_info() takes a struct dmar_domain * argument, but doesn't
use it. Remove it. No functional change intended.
The last use of this argument was removed by 127c761598 ("iommu/vt-d:
Pass device_domain_info to __dmar_remove_one_dev_info").
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
A local variable initialization is a hint that the variable will be used in
an unusual way. If the initialization is unnecessary, that hint becomes a
distraction.
Remove unnecessary initializations. No functional change intended.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Use dev_printk() when possible so the IOMMU messages are more consistent
with other messages related to the device.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Use dev_printk() when possible so the IOMMU messages are more consistent
with other messages related to the device.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Use dev_printk() when possible so the IOMMU messages are more consistent
with other messages related to the device.
E.g., I think these messages related to surprise hotplug:
pciehp 0000:80:10.0:pcie004: Slot(36): Link Down
iommu: Removing device 0000:87:00.0 from group 12
pciehp 0000:80:10.0:pcie004: Slot(36): Card present
pcieport 0000:80:10.0: Data Link Layer Link Active not set in 1000 msec
would be easier to read as these (also requires some PCI changes not
included here):
pci 0000:80:10.0: Slot(36): Link Down
pci 0000:87:00.0: Removing from iommu group 12
pci 0000:80:10.0: Slot(36): Card present
pci 0000:80:10.0: Data Link Layer Link Active not set in 1000 msec
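For illustration, the conversions are of this shape (a generic sketch, not a
specific hunk):

  /* before: the device has to be stitched into the format string */
  pr_info("Removing device %s from group %d\n", dev_name(dev), group->id);

  /* after: the "driver device:" prefix comes for free and stays consistent */
  dev_info(dev, "Removing from iommu group %d\n", group->id);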
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Move io-pgtable.h to include/linux/ and export alloc_io_pgtable_ops
and free_io_pgtable_ops. This enables drivers outside drivers/iommu/ to
use the page table library. Specifically, some ARM Mali GPUs use the
ARM page table formats.
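A minimal usage sketch for an out-of-tree user (illustrative; my_tlb_ops,
my_cookie and the chosen geometry are assumptions, not part of this patch):

  #include <linux/io-pgtable.h>

  struct io_pgtable_cfg cfg = {
          .pgsize_bitmap  = SZ_4K | SZ_2M,
          .ias            = 48,
          .oas            = 40,
          .tlb            = &my_tlb_ops,  /* driver-provided TLB flush callbacks */
          .iommu_dev      = dev,
  };
  struct io_pgtable_ops *ops;

  ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1, &cfg, my_cookie);
  if (!ops)
          return -ENOMEM;
  /* ... ops->map() / ops->unmap() ... */
  free_io_pgtable_ops(ops);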
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: iommu@lists.linux-foundation.org
Cc: linux-mediatek@lists.infradead.org
Cc: linux-arm-msm@vger.kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The device links used by rockchip-iommu and exynos-iommu are
completely managed by these drivers within the IOMMU framework,
so there is no reason to involve the driver core in the management
of these links.
For this reason, make rockchip-iommu and exynos-iommu pass
DL_FLAG_STATELESS in flags to device_link_add(), so that the device
links used by them are stateless.
[Note that this change is requisite for a subsequent one that will
rework the management of stateful device links in the driver core
and it will not be compatible with the two drivers in question any
more.]
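For illustration, the calls now look along these lines (a hedged sketch; the
additional flags each driver passes may differ):

  link = device_link_add(dev /* consumer */, iommu_dev /* supplier */,
                         DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME);
  if (!link)
          dev_err(dev, "Unable to link with %s\n", dev_name(iommu_dev));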
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge tag 'iommu-fixes-v5.0-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU fixes from Joerg Roedel:
"A few more fixes this time:
- Two patches to fix the error path of the map_sg implementation of
the AMD IOMMU driver.
- Also a missing IOTLB flush is fixed in the AMD IOMMU driver.
- Memory leak fix for the Intel IOMMU driver.
- Fix a regression in the Mediatek IOMMU driver which caused device
initialization to fail (seen as broken HDMI output)"
* tag 'iommu-fixes-v5.0-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
iommu/amd: Fix IOMMU page flush when detach device from a domain
iommu/mediatek: Use correct fwspec in mtk_iommu_add_device()
iommu/vt-d: Fix memory leak in intel_iommu_put_resv_regions()
iommu/amd: Unmap all mapped pages in error path of map_sg
iommu/amd: Call free_iova_fast with pfn in map_sg
The change_pte() interface is tailored for PFN updates, while the
other notifier invalidate_range() should be enough for Intel IOMMU
cache flushing. Actually we've done a similar thing for the AMD IOMMU
already in 8301da53fb ("iommu/amd: Remove change_pte mmu_notifier
call-back", 2014-07-30), but the Intel IOMMU driver still has it.
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The AMD IOMMU driver is using clear_flush_young() to do cache flushing,
but that's actually already covered by invalidate_range(). Remove the
extra notifier and the related chunks of code.
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Since there are multiple possible failures in iommu_map_page, it would
be useful to know which case is being hit when the error message is
printed in map_sg. While here, fix up a checkpatch complaint about
using the function name in a string instead of __func__.
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Commit 765b6a98c1 ("iommu/vt-d: Enumerate the scalable
mode capability") enables VT-d scalable mode if hardware
advertises the capability. As we will bring different
features and use cases upstream in different patch series,
this will leave some intermediate kernel versions which only
support a partial set of features. Hence, end users might
run into problems when they use such kernels on bare metal
or in virtualization environments.
This leaves scalable mode off by default, so that end users
can turn it on with "intel-iommu=sm_on" only when they have
a clear idea about which scalable features are supported in
their kernel.
Cc: Liu Yi L <yi.l.liu@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Suggested-by: Ashok Raj <ashok.raj@intel.com>
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Currently the Intel IOMMU uses the default dma_[un]map_resource()
implementations, which do nothing and simply return the physical
address unmodified.
However, this doesn't create the IOVA entries necessary for addresses
mapped this way to work when the IOMMU is enabled. Thus, when the
IOMMU is enabled, drivers relying on dma_map_resource() will trigger
DMAR errors. We see this when running ntb_transport with the IOMMU
enabled, DMA, and switchtec hardware.
The implementation for intel_map_resource() is nearly identical to
intel_map_page(); we just have to re-create __intel_map_single().
dma_unmap_resource() uses intel_unmap_page() directly as the
functions are identical.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
When a VM is terminated, the VFIO driver detaches all pass-through
devices from the VFIO domain by clearing the domain id and page table
root pointer from each device table entry (DTE), and then invalidates
the DTE. Then, the VFIO driver unmaps pages and invalidates IOMMU pages.
Currently, the IOMMU driver keeps track of which IOMMU and how many
devices are attached to the domain. When invalidating IOMMU pages,
the driver checks if the IOMMU is still attached to the domain before
issuing the invalidate page command.
However, since VFIO has already detached all devices from the domain,
the subsequent INVALIDATE_IOMMU_PAGES commands are being skipped as
there is no IOMMU attached to the domain. This results in data
corruption and could cause the PCI device to end up in an
indeterminate state.
Fix this by invalidating IOMMU pages when detaching a device, and
before decrementing the per-domain device reference counts.
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Suggested-by: Joerg Roedel <joro@8bytes.org>
Co-developed-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Fixes: 6de8ad9b9e ('x86/amd-iommu: Make iommu_flush_pages aware of multiple IOMMUs')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
end_pfn has not been used since commit <aa3ac9469c18> ('iommu/iova: Make dma
32bit pfn implicit'), so clean it up.
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The mtk_iommu_add_device() function keeps the fwspec in an
on-stack pointer and calls mtk_iommu_create_mapping(), which
might change its source, dev->iommu_fwspec. This causes the
on-stack pointer to become stale and the device
initialization to fail. Update the on-stack fwspec pointer
after mtk_iommu_create_mapping() has been called.
Reported-by: Frank Wunderlich <frank-w@public-files.de>
Fixes: a9bf2eec5a ('iommu/mediatek: Use helper functions to access dev->iommu_fwspec')
Tested-by: Frank Wunderlich <frank-w@public-files.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Commit 9d3a4de4cb ("iommu: Disambiguate MSI region types") changed
the reserved region type in intel_iommu_get_resv_regions() from
IOMMU_RESV_RESERVED to IOMMU_RESV_MSI, but it forgot to also change
the type in intel_iommu_put_resv_regions().
This leads to a memory leak, because now the check in
intel_iommu_put_resv_regions() for IOMMU_RESV_RESERVED will never
be true, and no allocated regions will be freed.
Fix this by changing the region type in intel_iommu_put_resv_regions()
to IOMMU_RESV_MSI, matching the type of the allocated regions.
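Roughly, the fixed function then reads (a sketch of
intel_iommu_put_resv_regions; the list iteration mirrors the existing code):

  static void intel_iommu_put_resv_regions(struct device *dev,
                                           struct list_head *head)
  {
          struct iommu_resv_region *entry, *next;

          list_for_each_entry_safe(entry, next, head, list) {
                  if (entry->type == IOMMU_RESV_MSI) /* was IOMMU_RESV_RESERVED */
                          kfree(entry);
          }
  }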
Fixes: 9d3a4de4cb ("iommu: Disambiguate MSI region types")
Cc: <stable@vger.kernel.org> # v4.11+
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
In the error path of map_sg there is an incorrect if condition
for breaking out of the loop that searches the scatterlist
for mapped pages to unmap. Instead of breaking out of the
loop once all the pages that were mapped have been unmapped,
it breaks out after unmapping just one page.
Fix the condition so it breaks out of the loop only after
all the mapped pages have been unmapped.
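The gist of the fix (a hedged sketch of the unwind loop's exit test):

  /* before: bails out of the unwind loop after a single page */
  if (--mapped_pages)
          goto out_err;

  /* after: keep unwinding until every mapped page has been unmapped */
  if (--mapped_pages == 0)
          goto out_err;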
Fixes: 80187fd39d ("iommu/amd: Optimize map_sg and unmap_sg")
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
In the error path of map_sg, free_iova_fast is being called with the
address instead of the pfn. This results in a bad value getting into
the rcache, and can result in hitting a BUG_ON when
iova_magazine_free_pfns is called.
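The fix is along these lines (a hedged sketch; dma_dom and npages are the
surrounding map_sg context):

  /* before: passes a DMA address where free_iova_fast() expects a pfn */
  free_iova_fast(&dma_dom->iovad, address, npages);

  /* after: convert the address to a pfn first */
  free_iova_fast(&dma_dom->iovad, address >> PAGE_SHIFT, npages);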
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
Fixes: 80187fd39d ("iommu/amd: Optimize map_sg and unmap_sg")
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Merge tag 'iommu-fixes-v5.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU fix from Joerg Roedel:
"One fix only for now: Fix probe deferral in iommu/of code (broke with
recent changes to iommu_ops->add_device invocation)"
* tag 'iommu-fixes-v5.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
iommu/of: Fix probe-deferral
Removed redundant safety-checks in the code and some debug code that
isn't actually very useful for debugging, like the enormous pagetable
dump on each fault. The majority of the changes are code reshuffling,
variable/whitespace cleanups and removal of debug messages that
duplicate messages of the IOMMU-core. Now the GART translation is kept
disabled while GART is suspended.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
GART is a simple IOMMU provider that has a single address space. There is
no need to set up a global clients list and manage it for tracking of the
active domain, hence lots of code can be safely removed and replaced
with a simpler alternative.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
There can be an unlimited number of allocated domains, but only one domain
can be active at a time. Hence devices must be detached only from the
active domain.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
GART became a part of the Memory Controller, hence now the driver's device
is the Memory Controller and not GART. As a result, all printed messages are
prepended with "tegra-mc 7000f000.memory-controller:", so let's
prepend GART's messages with "gart:" in order to differentiate them
from the MC's.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
GART is a part of the Memory Controller driver that is always built-in,
hence there is no benefit from the use of managed resources.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
GART has a single address space that is shared by all devices, hence only
one domain can be active at a time.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Fix NULL pointer dereference on IOMMU domain destruction that happens
because clients list is being iterated unsafely and its elements are
getting deleted during the iteration.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Fix spinlock recursion bug that happens on IOMMU domain destruction if
any of the allocated domains have devices attached to them.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>