Commit Graph

3044 Commits

Author SHA1 Message Date
Thierry Reding
a66c5dc549 iommu: arm: Use generic_iommu_put_resv_regions()
Use the new standard function instead of open-coding it.

Cc: Will Deacon <will@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:07:03 +01:00
Thierry Reding
f9f6971ebb iommu: Implement generic_iommu_put_resv_regions()
Implement a generic function for removing reserved regions. This can be
used by drivers that don't do anything fancy with these regions other
than allocating memory for them.
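A minimal sketch of the shape such a helper could take (a reading of the
description above, not the exact mainline code; the kfree-per-entry walk
is an assumption):

  /* Sketch: free every reserved region a driver handed out from its
   * ->get_resv_regions() callback, assuming plainly allocated regions. */
  void generic_iommu_put_resv_regions(struct device *dev, struct list_head *head)
  {
  	struct iommu_resv_region *entry, *next;

  	list_for_each_entry_safe(entry, next, head, list)
  		kfree(entry);
  }

Drivers that only allocate memory for their regions can then point their
put_resv_regions callback at such a helper instead of duplicating the loop.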

Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:07:03 +01:00
Qian Cai
944c917539 iommu/iova: Silence warnings under memory pressure
When running heavy memory pressure workloads, this 5+ year-old system
throws the endless warnings below because disk IO is too slow to
recover from swapping. Since the volume of calls from alloc_iova_fast()
can be large, once it calls printk() it will trigger disk IO (writing
to the log files) and pending softirqs, which can cause an infinite
loop and leave the ongoing memory reclaim making no progress for days.
This is the Intel counterpart of the AMD change that has already been
merged; see commit 3d70889532 ("iommu/amd: Silence warnings under memory
pressure"). Since the allocation failure will be reported in
intel_alloc_iova() anyway, just call dev_err_once() there because even
the "ratelimited" variant is too much, and silence the warning in
alloc_iova_mem() to avoid the expensive warn_alloc().

 hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
 hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
 hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
 hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
 hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
 hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
 hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
 hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
 slab_out_of_memory: 66 callbacks suppressed
 SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
   cache: iommu_iova, object size: 40, buffer size: 448, default order:
0, min order: 0
   node 0: slabs: 1822, objs: 16398, free: 0
   node 1: slabs: 2051, objs: 18459, free: 31
 SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
   cache: iommu_iova, object size: 40, buffer size: 448, default order:
0, min order: 0
   node 0: slabs: 1822, objs: 16398, free: 0
   node 1: slabs: 2051, objs: 18459, free: 31
 SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
   cache: iommu_iova, object size: 40, buffer size: 448, default order:
0, min order: 0
 SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
 SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
 SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
 SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
 SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
   cache: skbuff_head_cache, object size: 208, buffer size: 640, default
order: 0, min order: 0
   cache: skbuff_head_cache, object size: 208, buffer size: 640, default
order: 0, min order: 0
   cache: skbuff_head_cache, object size: 208, buffer size: 640, default
order: 0, min order: 0
   cache: skbuff_head_cache, object size: 208, buffer size: 640, default
order: 0, min order: 0
   node 0: slabs: 697, objs: 4182, free: 0
   node 0: slabs: 697, objs: 4182, free: 0
   node 0: slabs: 697, objs: 4182, free: 0
   node 0: slabs: 697, objs: 4182, free: 0
   node 1: slabs: 381, objs: 2286, free: 27
   node 1: slabs: 381, objs: 2286, free: 27
   node 1: slabs: 381, objs: 2286, free: 27
   node 1: slabs: 381, objs: 2286, free: 27
   node 0: slabs: 1822, objs: 16398, free: 0
   cache: skbuff_head_cache, object size: 208, buffer size: 640, default
order: 0, min order: 0
   node 1: slabs: 2051, objs: 18459, free: 31
   node 0: slabs: 697, objs: 4182, free: 0
 SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
   node 1: slabs: 381, objs: 2286, free: 27
   cache: skbuff_head_cache, object size: 208, buffer size: 640, default
order: 0, min order: 0
   node 0: slabs: 697, objs: 4182, free: 0
   node 1: slabs: 381, objs: 2286, free: 27
 hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
 warn_alloc: 96 callbacks suppressed
 kworker/11:1H: page allocation failure: order:0,
mode:0xa20(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0-1
 CPU: 11 PID: 1642 Comm: kworker/11:1H Tainted: G    B
 Hardware name: HP ProLiant XL420 Gen9/ProLiant XL420 Gen9, BIOS U19
12/27/2015
 Workqueue: kblockd blk_mq_run_work_fn
 Call Trace:
  dump_stack+0xa0/0xea
  warn_alloc.cold.94+0x8a/0x12d
  __alloc_pages_slowpath+0x1750/0x1870
  __alloc_pages_nodemask+0x58a/0x710
  alloc_pages_current+0x9c/0x110
  alloc_slab_page+0xc9/0x760
  allocate_slab+0x48f/0x5d0
  new_slab+0x46/0x70
  ___slab_alloc+0x4ab/0x7b0
  __slab_alloc+0x43/0x70
  kmem_cache_alloc+0x2dd/0x450
 SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
  alloc_iova+0x33/0x210
   cache: skbuff_head_cache, object size: 208, buffer size: 640, default
order: 0, min order: 0
   node 0: slabs: 697, objs: 4182, free: 0
  alloc_iova_fast+0x62/0x3d1
   node 1: slabs: 381, objs: 2286, free: 27
  intel_alloc_iova+0xce/0xe0
  intel_map_sg+0xed/0x410
  scsi_dma_map+0xd7/0x160
  scsi_queue_rq+0xbf7/0x1310
  blk_mq_dispatch_rq_list+0x4d9/0xbc0
  blk_mq_sched_dispatch_requests+0x24a/0x300
  __blk_mq_run_hw_queue+0x156/0x230
  blk_mq_run_work_fn+0x3b/0x40
  process_one_work+0x579/0xb90
  worker_thread+0x63/0x5b0
  kthread+0x1e6/0x210
  ret_from_fork+0x3a/0x50
 Mem-Info:
 active_anon:2422723 inactive_anon:361971 isolated_anon:34403
  active_file:2285 inactive_file:1838 isolated_file:0
  unevictable:0 dirty:1 writeback:5 unstable:0
  slab_reclaimable:13972 slab_unreclaimable:453879
  mapped:2380 shmem:154 pagetables:6948 bounce:0
  free:19133 free_pcp:7363 free_cma:0
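A hedged sketch of the reporting change described above (variable names
assumed from context, not copied from the driver):

  /* Sketch: report the failure once per device instead of flooding the
   * log (and triggering more IO) under memory pressure. */
  iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
  			   dma_mask >> PAGE_SHIFT, true);
  if (!iova_pfn) {
  	dev_err_once(dev, "Allocating %ld-page iova failed\n", nrpages);
  	return 0;
  }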

Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:07:03 +01:00
Krzysztof Kozlowski
d0432345b4 iommu: Fix Kconfig indentation
Adjust indentation from spaces to tabs (plus an optional two spaces), as
per the coding style, with a command like:
	$ sed -e 's/^        /\t/' -i */Kconfig

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:26 +01:00
Suravee Suthikulpanit
966b753cf3 iommu/amd: Only support x2APIC with IVHD type 11h/40h
The current implementation of IOMMU x2APIC support makes use of
MMIO access to the MSI capability block registers, which requires
checking EFR[MsiCapMmioSup]. However, only IVHD types 11h/40h contain
this information; it is not present in the IVHD type 10h IOMMU feature
reporting field. Since the BIOS in newer systems that support x2APIC
would normally contain IVHD type 11h/40h, remove the
IOMMU_FEAT_XTSUP_SHIFT check for IVHD type 10h, and only support x2APIC
with IVHD type 11h/40h.

Fixes: 6692981295 ('iommu/amd: Add support for X2APIC IOMMU interrupts')
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:15 +01:00
Suravee Suthikulpanit
813071438e iommu/amd: Check feature support bit before accessing MSI capability registers
IOMMU MMIO access to the MSI capability registers is available only if
EFR[MsiCapMmioSup] is set. The current implementation assumes this bit
is set whenever EFR[XtSup] is set, which might not be the case.

Fix by checking the EFR[MsiCapMmioSup] before accessing the MSI address
low/high and MSI data registers via the MMIO.
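The pattern, sketched with a hypothetical feature-bit name (the driver
defines its own macro for the EFR[MsiCapMmioSup] bit; the name below is
illustrative only):

  /* FEATURE_MSICAPMMIOSUP is a placeholder name, not the driver's macro. */
  if (!iommu_feature(iommu, FEATURE_MSICAPMMIOSUP))
  	return;		/* MSI capability block is not MMIO-accessible */

  /* ...only now read/write the MSI address low/high and data registers... */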

Fixes: 6692981295 ('iommu/amd: Add support for X2APIC IOMMU interrupts')
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:15 +01:00
Adrian Huang
387caf0b75 iommu/amd: Treat per-device exclusion ranges as r/w unity-mapped regions
Some buggy BIOSes might define multiple exclusion ranges in the
IVMD entries that are associated with the same IOMMU hardware.
This leads to the exclusion range (the exclusion_start and
exclusion_length members) being overwritten in
set_device_exclusion_range().

Here is a real case:
When attaching two Broadcom RAID controllers to a server, the first
one reports a failure during boot (the disks connected to the RAID
controller cannot be detected).

This patch prevents the issue by treating per-device exclusion
ranges as r/w unity-mapped regions.

Discussion:
  * https://lists.linuxfoundation.org/pipermail/iommu/2019-November/040140.html

Suggested-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Adrian Huang <ahuang12@lenovo.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:15 +01:00
Will Deacon
1ea27ee2f7 iommu/arm-smmu: Update my email address in MODULE_AUTHOR()
I no longer work for Arm, so update the stale reference to my old email
address.

Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:06 +01:00
Will Deacon
cd221bd24f iommu/arm-smmu: Allow building as a module
By conditionally dropping support for the legacy binding and exporting
the newly introduced 'arm_smmu_impl_init()' function we can allow the
ARM SMMU driver to be built as a module.

Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:05 +01:00
Will Deacon
7359572e1a iommu/arm-smmu: Unregister IOMMU and bus ops on device removal
When removing the SMMU driver, we need to clear any state that we
registered during probe. This includes our bus ops, sysfs entries and
the IOMMU device registered for early firmware probing of masters.

Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:05 +01:00
Will Deacon
2852ad05e3 iommu/arm-smmu-v3: Allow building as a module
By removing the redundant call to 'pci_request_acs()' we can allow the
ARM SMMUv3 driver to be built as a module.

Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:05 +01:00
Ard Biesheuvel
d3daf66621 iommu/arm-smmu: Support SMMU module probing from the IORT
Add support for SMMU drivers built as modules to the ACPI/IORT device
probing path, by deferring the probe of the master if the SMMU driver is
known to exist but has not been loaded yet. Given that the IORT code
registers a platform device for each SMMU that it discovers, we can
easily trigger the udev based autoloading of the SMMU drivers by making
the platform device identifier part of the module alias.
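In practice that boils down to a one-liner of roughly this shape (the
exact alias string is assumed to match the platform device name):

  /* Lets udev autoload the driver when the IORT code registers the
   * matching platform device. */
  MODULE_ALIAS("platform:arm-smmu-v3");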

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: John Garry <john.garry@huawei.com> # only manual smmu ko loading
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:05 +01:00
Will Deacon
ab24677471 iommu/arm-smmu-v3: Unregister IOMMU and bus ops on device removal
When removing the SMMUv3 driver, we need to clear any state that we
registered during probe. This includes our bus ops, sysfs entries and
the IOMMU device registered for early firmware probing of masters.

Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:05 +01:00
Will Deacon
34debdca68 iommu/arm-smmu: Prevent forced unbinding of Arm SMMU drivers
Forcefully unbinding the Arm SMMU drivers is a pretty dangerous operation,
since it will likely lead to catastrophic failure for any DMA devices
mastering through the SMMU being unbound. When the driver then attempts
to "handle" the fatal faults, it's very easy to trip over dead data
structures, leading to use-after-free.

On John's machine, he reports that the machine was "unusable" due to
loss of the storage controller following a forced unbind of the SMMUv3
driver:

  | # cd ./bus/platform/drivers/arm-smmu-v3
  | # echo arm-smmu-v3.0.auto > unbind
  | hisi_sas_v2_hw HISI0162:01: CQE_AXI_W_ERR (0x800) found!
  | platform arm-smmu-v3.0.auto: CMD_SYNC timeout at 0x00000146
  | [hwprod 0x00000146, hwcons 0x00000000]

Prevent this forced unbinding of the drivers by setting "suppress_bind_attrs"
to true.
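A sketch of what that looks like in the driver structure (fields other
than the one being added are illustrative):

  static struct platform_driver arm_smmu_driver = {
  	.driver	= {
  		.name			= "arm-smmu-v3",
  		/* Disallow manual bind/unbind through sysfs. */
  		.suppress_bind_attrs	= true,
  	},
  	.probe	= arm_smmu_device_probe,
  };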

Link: https://lore.kernel.org/lkml/06dfd385-1af0-3106-4cc5-6a5b8e864759@huawei.com
Reported-by: John Garry <john.garry@huawei.com>
Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:05 +01:00
Will Deacon
b06c076ea9 Revert "iommu/arm-smmu: Make arm-smmu explicitly non-modular"
This reverts commit addb672f20.

Let's get the SMMU driver building as a module, which means putting
back some dead code that we used to carry.

Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:05 +01:00
Will Deacon
6e8fa7404c Revert "iommu/arm-smmu: Make arm-smmu-v3 explicitly non-modular"
This reverts commit c07b6426df.

Let's get the SMMUv3 driver building as a module, which means putting
back some dead code that we used to carry.

Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:05 +01:00
Will Deacon
4312cf7f16 drivers/iommu: Allow IOMMU bus ops to be unregistered
'bus_set_iommu()' allows IOMMU drivers to register their ops for a given
bus type. Unfortunately, it then doesn't allow them to be removed, which
is necessary for modular drivers to shutdown cleanly so that they can be
reloaded later on.

Allow 'bus_set_iommu()' to take a NULL 'ops' argument, which clears the
ops pointer for the selected bus_type.
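A sketch of the new semantics (notifier teardown and error handling
omitted for brevity):

  int bus_set_iommu(struct bus_type *bus, const struct iommu_ops *ops)
  {
  	if (ops == NULL) {
  		/* Unregistering: forget the ops so the module can go away. */
  		bus->iommu_ops = NULL;
  		return 0;
  	}

  	if (bus->iommu_ops != NULL)
  		return -EBUSY;

  	bus->iommu_ops = ops;
  	return iommu_bus_init(bus, ops);
  }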

Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:05 +01:00
Will Deacon
386dce2788 iommu/of: Take a ref to the IOMMU driver during ->of_xlate()
Ensure that we hold a reference to the IOMMU driver module while calling
the '->of_xlate()' callback during early device probing.

Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:05 +01:00
Will Deacon
25f003de98 drivers/iommu: Take a ref to the IOMMU driver prior to ->add_device()
To avoid accidental removal of an active IOMMU driver module, take a
reference to the driver module in 'iommu_probe_device()' immediately
prior to invoking the '->add_device()' callback, and hold it until
after the device has been removed by '->remove_device()'.
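The refcounting, sketched (assuming the ops structure carries an owner
module pointer; surrounding probe logic elided):

  	/* Pin the IOMMU driver module while the device is added to it. */
  	if (!try_module_get(ops->owner))
  		return -ENODEV;

  	ret = ops->add_device(dev);
  	if (ret)
  		module_put(ops->owner);

  	/* ...and on the removal path, after ops->remove_device(dev): */
  	module_put(ops->owner);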

Suggested-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:05 +01:00
Will Deacon
6bf6c24720 iommu/of: Request ACS from the PCI core when configuring IOMMU linkage
To avoid having to export 'pci_request_acs()' to modular IOMMU drivers,
move the call into the 'of_dma_configure()' path in a similar manner to
the way in which ACS is configured when probing via ACPI/IORT.

Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:05 +01:00
Will Deacon
a7ba5c3d00 drivers/iommu: Export core IOMMU API symbols to permit modular drivers
Building IOMMU drivers as modules requires that the core IOMMU API
symbols are exported as GPL symbols.

Signed-off-by: Will Deacon <will@kernel.org>
Tested-by: John Garry <john.garry@huawei.com> # smmu v3
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-23 14:06:05 +01:00
Linus Torvalds
b371ddb94f IOMMU Fixes for Linux v5.5-rc2
Including:
 
 	- Fix kmemleak warning in IOVA code
 
 	- Fix compile warnings on ARM32/64 in dma-iommu code due to
 	  dma_mask type mismatches
 
 	- Make ISA reserved regions relaxable, so that VFIO can assign
 	  devices which have such regions defined
 
 	- Fix mapping errors resulting in IO page-faults in the VT-d
 	  driver
 
 	- Make sure direct mappings for a domain are created after the
 	  default domain is updated
 
 	- Map ISA reserved regions in the VT-d driver with correct
 	  permissions
 
 	- Remove unneeded check for PSI capability in the IOTLB flush
 	  code of the VT-d driver
 
 	- Lockdep fix iommu_dma_prepare_msi()
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEr9jSbILcajRFYWYyK/BELZcBGuMFAl38rzEACgkQK/BELZcB
 GuNPChAAzFdZw0GRphdnsrGog7vJICukFshPifLD8NeJXYzqLzRY89LT/sg4gZrZ
 K3Uibg8+0OmWl21JqAzDzXeHYUwDV0Xe/ygjeOdqFn3LY8zCo6UcY4OLCZ1az/XU
 om/yjTBgZjgBcUAxkzRJSdditQ2p7ItEa4dXnlpeCV07vQEmS/5x8JkNsea7CG2h
 bvBLYW5DpJ1LsJo1WjONHw0DvRkExQsXZaA3zj/6BzfQIXUnkF1Imkgr9gTbXXOl
 nGHHRLVtsFqv0U5JWz6fZh4/UgvInq45gZIkvvxQWAM/Kn9wxe2RKDwpQJ1wZ8wc
 S5fwSPa5g5k2X73BbEHx7AFYESpgCRFOeG74i9b7/DlzsbM+aTGPZ1/4kLt9fl+u
 +AOUV3l9/rqrrmeUEBF7F3kFC9/OL0KIT17xdJfQG1x3RBm9OHy1q0GQH4q8ZbWM
 aoWg3Ryc4uO/4Majm/kIjADKR0512LvplXsRXhWpud37szhL6vMJDxBb1zKZJgQ1
 j/PFUWgolCvmSG1Q048I9pljrsqfE9pgQhmITQ9VAny6eAaZgT7Y21MbBTyQksem
 /O08TWGAFddH4U9pGnQ1ST/q5hcVvnUgzy12A3MOuOYh5tWfAbeZparjQsu7bFhp
 uaJudGyXg9Xgrg82XQ2x/Nk710wLRAGG4VKnUz+mBDEzOOmcems=
 =VWME
 -----END PGP SIGNATURE-----

Merge tag 'iommu-fixes-v5.5-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull iommu fixes from Joerg Roedel:

 - Fix kmemleak warning in IOVA code

 - Fix compile warnings on ARM32/64 in dma-iommu code due to dma_mask
   type mismatches

 - Make ISA reserved regions relaxable, so that VFIO can assign devices
   which have such regions defined

 - Fix mapping errors resulting in IO page-faults in the VT-d driver

 - Make sure direct mappings for a domain are created after the default
   domain is updated

 - Map ISA reserved regions in the VT-d driver with correct permissions

 - Remove unneeded check for PSI capability in the IOTLB flush code of
   the VT-d driver

 - Lockdep fix iommu_dma_prepare_msi()

* tag 'iommu-fixes-v5.5-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
  iommu/dma: Relax locking in iommu_dma_prepare_msi()
  iommu/vt-d: Remove incorrect PSI capability check
  iommu/vt-d: Allocate reserved region for ISA with correct permission
  iommu: set group default domain before creating direct mappings
  iommu/vt-d: Fix dmar pte read access not set error
  iommu/vt-d: Set ISA bridge reserved region as relaxable
  iommu/dma: Rationalise types for DMA masks
  iommu/iova: Init the struct iova to fix the possible memleak
2019-12-20 10:42:25 -08:00
James Sewart
09298542cd PCI: Add nr_devfns parameter to pci_add_dma_alias()
Add a "nr_devfns" parameter to pci_add_dma_alias() so it can be used to
create DMA aliases for a range of devfns.
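A usage sketch under the new signature (device and devfn values are
purely illustrative):

  /* Alias eight consecutive devfns starting at 10.0 in a single call. */
  static void quirk_alias_devfn_range(struct pci_dev *pdev)
  {
  	pci_add_dma_alias(pdev, PCI_DEVFN(0x10, 0x0), 8);
  }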

[bhelgaas: incorporate nr_devfns fix from James, update
quirk_pex_vca_alias() and setup_aliases()]
Signed-off-by: James Sewart <jamessewart@arista.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
2019-12-18 12:21:25 -06:00
Robin Murphy
c18647900e iommu/dma: Relax locking in iommu_dma_prepare_msi()
Since commit ece6e6f021 ("iommu/dma-iommu: Split iommu_dma_map_msi_msg()
in two parts"), iommu_dma_prepare_msi() should no longer have to worry
about preempting itself, nor being called in atomic context at all. Thus
we can downgrade the IRQ-safe locking to a simple mutex to avoid angering
the new might_sleep() check in iommu_map().
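The gist of the locking change, sketched (lock and helper names are
assumptions drawn from context):

  /* A plain mutex is enough now that callers can sleep, and it no
   * longer upsets the might_sleep() check in iommu_map(). */
  static DEFINE_MUTEX(msi_prepare_lock);

  int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr)
  {
  	struct device *dev = msi_desc_to_dev(desc);
  	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
  	struct iommu_dma_msi_page *msi_page;

  	mutex_lock(&msi_prepare_lock);
  	msi_page = iommu_dma_get_msi_page(dev, msi_addr, domain);
  	mutex_unlock(&msi_prepare_lock);

  	/* ...record msi_page on the descriptor for later use... */
  	return msi_page ? 0 : -ENOMEM;
  }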

Reported-by: Qian Cai <cai@lca.pw>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-18 17:41:36 +01:00
Lu Baolu
f81b846dcd iommu/vt-d: Remove incorrect PSI capability check
The PSI (Page Selective Invalidation) bit in the capability register
is only valid for second-level translation. Intel IOMMU supporting
scalable mode must support page/address selective IOTLB invalidation
for first-level translation. Remove the PSI capability check in SVA
cache invalidation code.

Fixes: 8744daf4b0 ("iommu/vt-d: Remove global page flush support")
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-18 16:18:34 +01:00
Jerry Snitselaar
cde9319e88 iommu/vt-d: Allocate reserved region for ISA with correct permission
Currently the reserved region for ISA is allocated with no
permissions. If a dma domain is being used, mapping this region will
fail. Set the permissions to DMA_PTE_READ|DMA_PTE_WRITE.
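A sketch of the described fix (constants as named in the commit message;
the surrounding reserved-region code is assumed):

  	/* Reserve the 16MB ISA range with read/write permission so a DMA
  	 * domain can actually map it. */
  	reg = iommu_alloc_resv_region(0, 1UL << 24,
  				      DMA_PTE_READ | DMA_PTE_WRITE,
  				      IOMMU_RESV_DIRECT_RELAXABLE);
  	if (reg)
  		list_add_tail(&reg->list, head);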

Cc: Joerg Roedel <jroedel@suse.de>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: stable@vger.kernel.org # v5.3+
Fixes: d850c2ee5f ("iommu/vt-d: Expose ISA direct mapping region via iommu_get_resv_regions")
Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-17 11:48:18 +01:00
Jerry Snitselaar
d360211524 iommu: set group default domain before creating direct mappings
iommu_group_create_direct_mappings uses group->default_domain, but
right after it is called, request_default_domain_for_dev calls
iommu_domain_free for the default domain, and sets the group default
domain to a different domain. Move the
iommu_group_create_direct_mappings call to after the group default
domain is set, so the direct mappings get associated with that domain.
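Roughly, the reordering amounts to (a hedged sketch; the real function
has more going on around it):

  	/* Make the new domain the group default *before* wiring up the
  	 * direct (reserved-region) mappings, so they are created in the
  	 * domain that will actually be used. */
  	group->default_domain = dom;
  	if (!group->domain)
  		group->domain = dom;
  	iommu_group_create_direct_mappings(group, dev);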

Cc: Joerg Roedel <jroedel@suse.de>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: stable@vger.kernel.org
Fixes: 7423e01741 ("iommu: Add API to request DMA domain for device")
Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-17 11:45:17 +01:00
Lu Baolu
75d1838539 iommu/vt-d: Fix dmar pte read access not set error
If the default DMA domain of a group doesn't fit a device, it
will still sit in the group but use a private identity domain.
When map/unmap/iova_to_phys calls come through the IOMMU API, the
driver should still serve them; otherwise, other devices in the same
group will be impacted. Since the identity domain has been mapped
with the whole available memory space and the RMRRs, we don't need
to worry about the impact on it.

Link: https://www.spinics.net/lists/iommu/msg40416.html
Cc: Jerry Snitselaar <jsnitsel@redhat.com>
Reported-by: Jerry Snitselaar <jsnitsel@redhat.com>
Fixes: 942067f1b6 ("iommu/vt-d: Identify default domains replaced with private")
Cc: stable@vger.kernel.org # v5.3+
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Tested-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-17 11:37:25 +01:00
Alex Williamson
d8018a0e91 iommu/vt-d: Set ISA bridge reserved region as relaxable
Commit d850c2ee5f ("iommu/vt-d: Expose ISA direct mapping region via
iommu_get_resv_regions") created a direct-mapped reserved memory region
in order to replace the static identity mapping of the ISA address
space, where the latter was then removed in commit df4f3c603a
("iommu/vt-d: Remove static identity map code").  According to the
history of this code and the Kconfig option surrounding it, this direct
mapping exists for the benefit of legacy ISA drivers that are not
compatible with the DMA API.

In conjunction with commit 9b77e5c798 ("vfio/type1: check dma map
request is within a valid iova range") this change introduced a
regression where the vfio IOMMU backend enforces reserved memory regions
per IOMMU group, preventing userspace from creating IOMMU mappings
conflicting with prescribed reserved regions.  A necessary prerequisite
for the vfio change was the introduction of "relaxable" direct mappings
introduced by commit adfd373820 ("iommu: Introduce
IOMMU_RESV_DIRECT_RELAXABLE reserved memory regions").  These relaxable
direct mappings provide the same identity mapping support in the default
domain, but also indicate that the reservation is software imposed and
may be relaxed under some conditions, such as device assignment.

Convert the ISA bridge direct-mapped reserved region to relaxable to
reflect that the restriction is self imposed and need not be enforced
by drivers such as vfio.

Fixes: 1c5c59fbad ("iommu/vt-d: Differentiate relaxable and non relaxable RMRRs")
Cc: stable@vger.kernel.org # v5.3+
Link: https://lore.kernel.org/linux-iommu/20191211082304.2d4fab45@x1.home
Reported-by: cprt <cprt@protonmail.com>
Tested-by: cprt <cprt@protonmail.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-17 11:20:28 +01:00
Robin Murphy
bd036d2fdd iommu/dma: Rationalise types for DMA masks
Since iommu_dma_alloc_iova() combines incoming masks with the u64 bus
limit, it makes more sense to pass them around in their native u64
rather than converting to dma_addr_t early. Do that, and resolve the
remaining type discrepancy against the domain geometry with a cheeky
cast to keep things simple.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Nathan Chancellor <natechancellor@gmail.com> # build
Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-17 11:17:28 +01:00
Xiaotao Yin
472d26df5e iommu/iova: Init the struct iova to fix the possible memleak
During an ethernet (Marvell octeontx2) set-ring-buffer test:
ethtool -G eth1 rx <rx ring size> tx <tx ring size>
the following kmemleak report will sometimes appear:

unreferenced object 0xffff000b85421340 (size 64):
  comm "ethtool", pid 867, jiffies 4295323539 (age 550.500s)
  hex dump (first 64 bytes):
    80 13 42 85 0b 00 ff ff ff ff ff ff ff ff ff ff  ..B.............
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    ff ff ff ff ff ff ff ff 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<000000001b204ddf>] kmem_cache_alloc+0x1b0/0x350
    [<00000000d9ef2e50>] alloc_iova+0x3c/0x168
    [<00000000ea30f99d>] alloc_iova_fast+0x7c/0x2d8
    [<00000000b8bb2f1f>] iommu_dma_alloc_iova.isra.0+0x12c/0x138
    [<000000002f1a43b5>] __iommu_dma_map+0x8c/0xf8
    [<00000000ecde7899>] iommu_dma_map_page+0x98/0xf8
    [<0000000082004e59>] otx2_alloc_rbuf+0xf4/0x158
    [<000000002b107f6b>] otx2_rq_aura_pool_init+0x110/0x270
    [<00000000c3d563c7>] otx2_open+0x15c/0x734
    [<00000000a2f5f3a8>] otx2_dev_open+0x3c/0x68
    [<00000000456a98b5>] otx2_set_ringparam+0x1ac/0x1d4
    [<00000000f2fbb819>] dev_ethtool+0xb84/0x2028
    [<0000000069b67c5a>] dev_ioctl+0x248/0x3a0
    [<00000000af38663a>] sock_ioctl+0x280/0x638
    [<000000002582384c>] do_vfs_ioctl+0x8b0/0xa80
    [<000000004e1a2c02>] ksys_ioctl+0x84/0xb8

The reason:
When alloc_iova_mem() does not zero-initialize the allocation, pfn_lo
can by chance equal IOVA_ANCHOR. When __alloc_and_insert_iova_range()
then returns -ENOMEM (iova32_full), the new_iova will not be freed in
free_iova_mem().
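A sketch of the kind of fix described (cache name as commonly used in
the IOVA code):

  static struct iova *alloc_iova_mem(void)
  {
  	/* Zero the node so pfn_lo can never alias IOVA_ANCHOR by accident,
  	 * and free_iova_mem() will always release it on failure. */
  	return kmem_cache_zalloc(iova_cache, GFP_ATOMIC);
  }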

Fixes: bb68b2fbfb ("iommu/iova: Add rbtree anchor node")
Signed-off-by: Xiaotao Yin <xiaotao.yin@windriver.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-17 11:13:20 +01:00
Eric Auger
4c80ba392b iommu: fix KASAN use-after-free in iommu_insert_resv_region
In case the new region gets merged into another one, the nr list node is
freed.  Checking its type while completing the merge algorithm leads to
a use-after-free.  Use new->type instead.

Fixes: 4dbd258ff6 ("iommu: Revisit iommu_insert_resv_region() implementation")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Qian Cai <cai@lca.pw>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Cc: Stable <stable@vger.kernel.org> #v5.3+
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-12-16 08:58:42 -08:00
Linus Torvalds
c3bed3b20e pci-v5.5-changes
-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAl3leXUUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vyY3g/9FAVVdPEaadNtAhQ/zIxcjozDovKq
 0q7yOA3aTBTUoNEinm88an6p0dcC4gNKtGukXmzVH2Hhxm9kLRdtpZGYY00tpLUB
 9rI7XsgwwHa+hLwsHbIs507sKGFGy5FLr0ChTTGLDEMppnEvjA2hZooYmcB/OgrC
 LlFcwbNKGOk/Si9u2bF2nLO0JDoVHnwzpF99saew/nqc7Lfj9e9IPZFom+VjPBUh
 AOvRp2H7uBN+WQlpLeFeMDDoeXh34lX0kYqIV/cVkXVnknDGYKV2CBTg2aeX7jd0
 QiPHZh6zlW8zNQgaCZRiBAbatVEOnRMRJ++yiqB8hBYp1LMXm6kJ01YSQpXkugoY
 Vp9dtzzTARWV/XkKwD4brw9ZEmIDnO+Ed2x2VbUkPJVcXAvzSQWAx82IU0Iuqmcb
 9qr6U2Zf/Xk5aFlGPYVH8QOG+QqzIbZNRQ7NlhDlITyW4P6QPu0mw374yYP2wDGL
 sP5YSS3YGa0sQcEgDtVnd4z+WTZI4AwXLPaeaLkDhdfHp2FsERUY4TrPs33J99xw
 og4EyokVFzjYzlnBPU6WWn7LL+jj5ccXkL3MA4DR4FJOnNGHh7NXfQUH56rrgsq7
 F9/8shL5DuTbQkde1uSyUG9Iq/RigVLlV5DQavFm3dSXvZi0E16t5alC5URNTzk7
 at8Bogn53QhlmYc=
 =uUXw
 -----END PGP SIGNATURE-----

Merge tag 'pci-v5.5-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:

   - Warn if a host bridge has no NUMA info (Yunsheng Lin)

   - Add PCI_STD_NUM_BARS for the number of standard BARs (Denis
     Efremov)

  Resource management:

   - Fix boot-time Embedded Controller GPE storm caused by incorrect
     resource assignment after ACPI Bus Check Notification (Mika
     Westerberg)

   - Protect pci_reassign_bridge_resources() against concurrent
     addition/removal (Benjamin Herrenschmidt)

   - Fix bridge dma_ranges resource list cleanup (Rob Herring)

   - Add "pci=hpmmiosize" and "pci=hpmmioprefsize" parameters to control
     the MMIO and prefetchable MMIO window sizes of hotplug bridges
     independently (Nicholas Johnson)

   - Fix MMIO/MMIO_PREF window assignment that assigned more space than
     desired (Nicholas Johnson)

   - Only enforce bus numbers from bridge EA if the bridge has EA
     devices downstream (Subbaraya Sundeep)

   - Consolidate DT "dma-ranges" parsing and convert all host drivers to
     use shared parsing (Rob Herring)

  Error reporting:

   - Restore AER capability after resume (Mayurkumar Patel)

   - Add PoisonTLPBlocked AER counter (Rajat Jain)

   - Use for_each_set_bit() to simplify AER code (Andy Shevchenko)

   - Fix AER kernel-doc (Andy Shevchenko)

   - Add "pcie_ports=dpc-native" parameter to allow native use of DPC
     even if platform didn't grant control over AER (Olof Johansson)

  Hotplug:

   - Avoid returning prematurely from sysfs requests to enable or
     disable a PCIe hotplug slot (Lukas Wunner)

   - Don't disable interrupts twice when suspending hotplug ports (Mika
     Westerberg)

   - Fix deadlocks when PCIe ports are hot-removed while suspended (Mika
     Westerberg)

  Power management:

   - Remove unnecessary ASPM locking (Bjorn Helgaas)

   - Add support for disabling L1 PM Substates (Heiner Kallweit)

   - Allow re-enabling Clock PM after it has been disabled (Heiner
     Kallweit)

   - Add sysfs attributes for controlling ASPM link states (Heiner
     Kallweit)

   - Remove CONFIG_PCIEASPM_DEBUG, including "link_state" and "clk_ctl"
     sysfs files (Heiner Kallweit)

   - Avoid AMD FCH XHCI USB PME# from D0 defect that prevents wakeup on
     USB 2.0 or 1.1 connect events (Kai-Heng Feng)

   - Move power state check out of pci_msi_supported() (Bjorn Helgaas)

   - Fix incorrect MSI-X masking on resume and revert related nvme quirk
     for Kingston NVME SSD running FW E8FK11.T (Jian-Hong Pan)

   - Always return devices to D0 when thawing to fix hibernation with
     drivers like mlx4 that used legacy power management (previously we
     only did it for drivers with new power management ops) (Dexuan Cui)

   - Clear PCIe PME Status even for legacy power management (Bjorn
     Helgaas)

   - Fix PCI PM documentation errors (Bjorn Helgaas)

   - Use dev_printk() for more power management messages (Bjorn Helgaas)

   - Apply D2 delay as milliseconds, not microseconds (Bjorn Helgaas)

   - Convert xen-platform from legacy to generic power management (Bjorn
     Helgaas)

   - Removed unused .resume_early() and .suspend_late() legacy power
     management hooks (Bjorn Helgaas)

   - Rearrange power management code for clarity (Rafael J. Wysocki)

   - Decode power states more clearly ("4" or "D4" really refers to
     "D3cold") (Bjorn Helgaas)

   - Notice when reading PM Control register returns an error (~0)
     instead of interpreting it as being in D3hot (Bjorn Helgaas)

   - Add missing link delays required by the PCIe spec (Mika Westerberg)

  Virtualization:

   - Move pci_prg_resp_pasid_required() to CONFIG_PCI_PRI (Bjorn
     Helgaas)

   - Allow VFs to use PRI (the PF PRI is shared by the VFs, but the code
     previously didn't recognize that) (Kuppuswamy Sathyanarayanan)

   - Allow VFs to use PASID (the PF PASID capability is shared by the
     VFs, but the code previously didn't recognize that) (Kuppuswamy
     Sathyanarayanan)

   - Disconnect PF and VF ATS enablement, since ATS in PFs and
     associated VFs can be enabled independently (Kuppuswamy
     Sathyanarayanan)

   - Cache PRI and PASID capability offsets (Kuppuswamy Sathyanarayanan)

   - Cache the PRI PRG Response PASID Required bit (Bjorn Helgaas)

   - Consolidate ATS declarations in linux/pci-ats.h (Krzysztof
     Wilczynski)

   - Remove unused PRI and PASID stubs (Bjorn Helgaas)

   - Removed unnecessary EXPORT_SYMBOL_GPL() from ATS, PRI, and PASID
     interfaces that are only used by built-in IOMMU drivers (Bjorn
     Helgaas)

   - Hide PRI and PASID state restoration functions used only inside the
     PCI core (Bjorn Helgaas)

   - Add a DMA alias quirk for the Intel VCA NTB (Slawomir Pawlowski)

   - Serialize sysfs sriov_numvfs reads vs writes (Pierre Crégut)

   - Update Cavium ACS quirk for ThunderX2 and ThunderX3 (George
     Cherian)

   - Fix the UPDCR register address in the Intel ACS quirk (Steffen
     Liebergeld)

   - Unify ACS quirk implementations (Bjorn Helgaas)

  Amlogic Meson host bridge driver:

   - Fix meson PERST# GPIO polarity problem (Remi Pommarel)

   - Add DT bindings for Amlogic Meson G12A (Neil Armstrong)

   - Fix meson clock names to match DT bindings (Neil Armstrong)

   - Add meson support for Amlogic G12A SoC with separate shared PHY
     (Neil Armstrong)

   - Add meson extended PCIe PHY functions for Amlogic G12A USB3+PCIe
     combo PHY (Neil Armstrong)

   - Add arm64 DT for Amlogic G12A PCIe controller node (Neil Armstrong)

   - Add commented-out description of VIM3 USB3/PCIe mux in arm64 DT
     (Neil Armstrong)

  Broadcom iProc host bridge driver:

   - Invalidate iProc PAXB address mapping before programming it
     (Abhishek Shah)

   - Fix iproc-msi and mvebu __iomem annotations (Ben Dooks)

  Cadence host bridge driver:

   - Refactor Cadence PCIe host controller to use as a library for both
     host and endpoint (Tom Joseph)

  Freescale Layerscape host bridge driver:

   - Add layerscape LS1028a support (Xiaowei Bao)

  Intel VMD host bridge driver:

   - Add VMD bus 224-255 restriction decode (Jon Derrick)

   - Add VMD 8086:9A0B device ID (Jon Derrick)

   - Remove Keith from VMD maintainer list (Keith Busch)

  Marvell ARMADA 3700 / Aardvark host bridge driver:

   - Use LTSSM state to build link training flag since Aardvark doesn't
     implement the Link Training bit (Remi Pommarel)

   - Delay before training Aardvark link in case PERST# was asserted
     before the driver probe (Remi Pommarel)

   - Fix Aardvark issues with Root Control reads and writes (Remi
     Pommarel)

   - Don't rely on jiffies in Aardvark config access path since
     interrupts may be disabled (Remi Pommarel)

   - Fix Aardvark big-endian support (Grzegorz Jaszczyk)

  Marvell ARMADA 370 / XP host bridge driver:

   - Make mvebu_pci_bridge_emul_ops static (Ben Dooks)

  Microsoft Hyper-V host bridge driver:

   - Add hibernation support for Hyper-V virtual PCI devices (Dexuan
     Cui)

   - Track Hyper-V pci_protocol_version per-hbus, not globally (Dexuan
     Cui)

   - Avoid kmemleak false positive on hv hbus buffer (Dexuan Cui)

  Mobiveil host bridge driver:

   - Change mobiveil csr_read()/write() function names that conflict
     with riscv arch functions (Kefeng Wang)

  NVIDIA Tegra host bridge driver:

   - Fix Tegra CLKREQ dependency programming (Vidya Sagar)

  Renesas R-Car host bridge driver:

   - Remove unnecessary header include from rcar (Andrew Murray)

   - Tighten register index checking for rcar inbound range programming
     (Marek Vasut)

   - Fix rcar inbound range alignment calculation to improve packing of
     multiple entries (Marek Vasut)

   - Update rcar MACCTLR setting to match documentation (Yoshihiro
     Shimoda)

   - Clear bit 0 of MACCTLR before PCIETCTLR.CFINIT per manual
     (Yoshihiro Shimoda)

   - Add Marek Vasut and Yoshihiro Shimoda as R-Car maintainers (Simon
     Horman)

  Rockchip host bridge driver:

   - Make rockchip 0V9 and 1V8 power regulators non-optional (Robin
     Murphy)

  Socionext UniPhier host bridge driver:

   - Set uniphier to host (RC) mode always (Kunihiko Hayashi)

  Endpoint drivers:

   - Fix endpoint driver sign extension problem when shifting page
     number to phys_addr_t (Alan Mikhak)

  Misc:

   - Add NumaChip SPDX header (Krzysztof Wilczynski)

   - Replace EXTRA_CFLAGS with ccflags-y (Krzysztof Wilczynski)

   - Remove unused includes (Krzysztof Wilczynski)

   - Removed unused sysfs attribute groups (Ben Dooks)

   - Remove PTM and ASPM dependencies on PCIEPORTBUS (Bjorn Helgaas)

   - Add PCIe Link Control 2 register field definitions to replace magic
     numbers in AMDGPU and Radeon CIK/SI (Bjorn Helgaas)

   - Fix incorrect Link Control 2 Transmit Margin usage in AMDGPU and
     Radeon CIK/SI PCIe Gen3 link training (Bjorn Helgaas)

   - Use pcie_capability_read_word() instead of pci_read_config_word()
     in AMDGPU and Radeon CIK/SI (Frederick Lawler)

   - Remove unused pci_irq_get_node() Greg Kroah-Hartman)

   - Make asm/msi.h mandatory and simplify PCI_MSI_IRQ_DOMAIN Kconfig
     (Palmer Dabbelt, Michal Simek)

   - Read all 64 bits of Switchtec part_event_bitmap (Logan Gunthorpe)

   - Fix erroneous intel-iommu dependency on CONFIG_AMD_IOMMU (Bjorn
     Helgaas)

   - Fix bridge emulation big-endian support (Grzegorz Jaszczyk)

   - Fix dwc find_next_bit() usage (Niklas Cassel)

   - Fix pcitest.c fd leak (Hewenliang)

   - Fix typos and comments (Bjorn Helgaas)

   - Fix Kconfig whitespace errors (Krzysztof Kozlowski)"

* tag 'pci-v5.5-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (160 commits)
  PCI: Remove PCI_MSI_IRQ_DOMAIN architecture whitelist
  asm-generic: Make msi.h a mandatory include/asm header
  Revert "nvme: Add quirk for Kingston NVME SSD running FW E8FK11.T"
  PCI/MSI: Fix incorrect MSI-X masking on resume
  PCI/MSI: Move power state check out of pci_msi_supported()
  PCI/MSI: Remove unused pci_irq_get_node()
  PCI: hv: Avoid a kmemleak false positive caused by the hbus buffer
  PCI: hv: Change pci_protocol_version to per-hbus
  PCI: hv: Add hibernation support
  PCI: hv: Reorganize the code in preparation of hibernation
  MAINTAINERS: Remove Keith from VMD maintainer
  PCI/ASPM: Remove PCIEASPM_DEBUG Kconfig option and related code
  PCI/ASPM: Add sysfs attributes for controlling ASPM link states
  PCI: Fix indentation
  drm/radeon: Prefer pcie_capability_read_word()
  drm/radeon: Replace numbers with PCI_EXP_LNKCTL2 definitions
  drm/radeon: Correct Transmit Margin masks
  drm/amdgpu: Prefer pcie_capability_read_word()
  PCI: uniphier: Set mode register to host mode
  drm/amdgpu: Replace numbers with PCI_EXP_LNKCTL2 definitions
  ...
2019-12-03 13:58:22 -08:00
Linus Torvalds
1daa56bcfd IOMMU Updates for Linux v5.5
Including:
 
 	- Conversion of the AMD IOMMU driver to use the dma-iommu code
 	  for implementing the DMA-API. This gets rid of quite some code
 	  in the driver itself, but also has some potential for
 	  regressions (none are known at the moment).
 
 	- Support for the Qualcomm SMMUv2 implementation in the SDM845
 	  SoC.  This also includes some firmware interface changes, but
 	  those are acked by the respective maintainers.
 
 	- Preparatory work to support two distinct page-tables per
 	  domain in the ARM-SMMU driver
 
 	- Power management improvements for the ARM SMMUv2
 
 	- Custom PASID allocator support
 
 	- Multiple PCI DMA alias support for the AMD IOMMU driver
 
 	- Adaptation of the Mediatek driver to the changed IO/TLB flush
 	  interface of the IOMMU core code.
 
 	- Preparatory patches for the Renesas IOMMU driver to support
 	  future hardware.
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEr9jSbILcajRFYWYyK/BELZcBGuMFAl3hCC4ACgkQK/BELZcB
 GuN9QQ/+LFh2TdhtiemIakA/19nv1FTP719uje7vjX4gGBGD++NEzW7mBcAXSEnD
 rBta1GsD6N8h0fdT53Nw8cezQ1ldBomKG3j+mzcju7TcuRwebhCEQaxh2iWy+I6g
 cp6HxTu3G0E6Zy7wd+MWyJzvXa7MXV2p8iCDs7Dp8yEow+c55b4LAIoeRWx3rjsT
 rat29MuJ8TGLP6vOYHcpI+REGfda4rsog75980RIoOEuqRjMG6JPj9clPeakSNtQ
 Rl1EtgrDskbRCgDSujbzDMHAYRUKvdCuTuTM1De/GQO+GWYsOtzqBHkct67sGn9I
 H518Be9m4xfYyyktVM6K9bSpxzCOtor+u6LFOejufJN/7vL2qtePZX7EHL/ks8zh
 Mn80H/1ch1UcFcF9p7V7QCMUSyaiX/VWhgwWIdPf3CGrKVaLnQ8mkB82Zf0VNuQT
 OzcfJcVF+skhDkXdFL5xUkQtqqTHhpaK2CzvvTDAsR1KXMCc6mH/MT/9m+mOFQFK
 P+klgGdU5rVniru10k4pamT5LlLubRV0NBpaAiGr2R3dfyYyiS/D9FBSLanqO+JM
 AgSnmOSbl7y927DxufkVPH8M7TxSdtQVo7VoQFjSWE8B9bh4qU6MVV30enLvY0Fj
 g4DP+8srOvY0vNsWNiBe2JpldABGEAbumFt78g1WV2tFi1d/NUI=
 =ntaE
 -----END PGP SIGNATURE-----

Merge tag 'iommu-updates-v5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull iommu updates from Joerg Roedel:

 - Conversion of the AMD IOMMU driver to use the dma-iommu code for
   implementing the DMA-API. This gets rid of quite some code in the
   driver itself, but also has some potential for regressions (none are
   known at the moment).

 - Support for the Qualcomm SMMUv2 implementation in the SDM845 SoC.
   This also includes some firmware interface changes, but those are
   acked by the respective maintainers.

 - Preparatory work to support two distinct page-tables per domain in
   the ARM-SMMU driver

 - Power management improvements for the ARM SMMUv2

 - Custom PASID allocator support

 - Multiple PCI DMA alias support for the AMD IOMMU driver

 - Adaptation of the Mediatek driver to the changed IO/TLB flush interface
   of the IOMMU core code.

 - Preparatory patches for the Renesas IOMMU driver to support future
   hardware.

* tag 'iommu-updates-v5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (62 commits)
  iommu/rockchip: Don't provoke WARN for harmless IRQs
  iommu/vt-d: Turn off translations at shutdown
  iommu/vt-d: Check VT-d RMRR region in BIOS is reported as reserved
  iommu/arm-smmu: Remove duplicate error message
  iommu/arm-smmu-v3: Don't display an error when IRQ lines are missing
  iommu/ipmmu-vmsa: Add utlb_offset_base
  iommu/ipmmu-vmsa: Add helper functions for "uTLB" registers
  iommu/ipmmu-vmsa: Calculate context registers' offset instead of a macro
  iommu/ipmmu-vmsa: Add helper functions for MMU "context" registers
  iommu/ipmmu-vmsa: tidyup register definitions
  iommu/ipmmu-vmsa: Remove all unused register definitions
  iommu/mediatek: Reduce the tlb flush timeout value
  iommu/mediatek: Get rid of the pgtlock
  iommu/mediatek: Move the tlb_sync into tlb_flush
  iommu/mediatek: Delete the leaf in the tlb_flush
  iommu/mediatek: Use gather to achieve the tlb range flush
  iommu/mediatek: Add a new tlb_lock for tlb_flush
  iommu/mediatek: Correct the flush_iotlb_all callback
  iommu/io-pgtable-arm: Rename IOMMU_QCOM_SYS_CACHE and improve doc
  iommu/io-pgtable-arm: Rationalise MAIR handling
  ...
2019-12-02 11:05:00 -08:00
Linus Torvalds
0dd0c8f7db - Support for new VMBus protocols (Andrea Parri).
- Hibernation support (Dexuan Cui).
 - Latency testing framework (Branden Bonaby).
 - Decoupling Hyper-V page size from guest page size (Himadri Pandya).
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEE4n5dijQDou9mhzu83qZv95d3LNwFAl3f5YIACgkQ3qZv95d3
 LNzBww/8Cpv/BnOs2cp56OhC+2++3YlWfmxGnvQb9h52weElgr1AZF33lAynp8BZ
 YssOcDnS/G2iAkNDffbQA7s3WTwIjP1weJibOeKbtcXp4SuhNR3gnJafufNddNDv
 bw8ZReLQV7hy3sHb3OUx0aJk5Mssp0N9ZpxRilyIpLELPfVp63gFebq6s1MQYljk
 BAiNO4SKqsGQGZApt2F4Cc3hX2wU2ZfiDm6SifXiLYITGnvilIn7XFIht+2jJBWS
 CdzRoGXcwhQhlj68XWlc89SOzJb7vVUMO1sr84psfbQ2LbhJU8lfJKRJ4b4lR07Z
 Uv5FYxjr14S65fv7DkzCfWU+uPN/sObG4pPXihlfqcTraOvYLQ6/x8cw+9tGZg4H
 aTtnF40hnO81aKsvPAeIsSzVkoyPaSrt7KKhk+Bw/5EUDTTNp6EbIuL4xwnKt6Rt
 2UpA5HM9guQqNb6OZrjlpZfJgd9bNP4CZLBTfOukmnZpONKr2Wv3wubcwQJ8ibQc
 1WZ5SfN2Wmg999Ski7j9qzHk0tWJxa6SX+2NLEHRKxy2nJSJ1zlAr//bznMyMgH/
 yKPDaSkOFoy0aqiTKV2WzuOY6FGXTrSo5vq8YAgYRgp3xB+5+7zLeqlj3ipXhLYE
 HH/eqB27eSnvi0jpub4TbszGJG0o4Z1aYx3aHYYqrOfWX/A5Vls=
 =oJGE
 -----END PGP SIGNATURE-----

Merge tag 'hyperv-next-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux

Pull Hyper-V updates from Sasha Levin:

 - support for new VMBus protocols (Andrea Parri)

 - hibernation support (Dexuan Cui)

 - latency testing framework (Branden Bonaby)

 - decoupling Hyper-V page size from guest page size (Himadri Pandya)

* tag 'hyperv-next-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux: (22 commits)
  Drivers: hv: vmbus: Fix crash handler reset of Hyper-V synic
  drivers/hv: Replace binary semaphore with mutex
  drivers: iommu: hyperv: Make HYPERV_IOMMU only available on x86
  HID: hyperv: Add the support of hibernation
  hv_balloon: Add the support of hibernation
  x86/hyperv: Implement hv_is_hibernation_supported()
  Drivers: hv: balloon: Remove dependencies on guest page size
  Drivers: hv: vmbus: Remove dependencies on guest page size
  x86: hv: Add function to allocate zeroed page for Hyper-V
  Drivers: hv: util: Specify ring buffer size using Hyper-V page size
  Drivers: hv: Specify receive buffer size using Hyper-V page size
  tools: hv: add vmbus testing tool
  drivers: hv: vmbus: Introduce latency testing
  video: hyperv: hyperv_fb: Support deferred IO for Hyper-V frame buffer driver
  video: hyperv: hyperv_fb: Obtain screen resolution from Hyper-V host
  hv_netvsc: Add the support of hibernation
  hv_sock: Add the support of hibernation
  video: hyperv_fb: Add the support of hibernation
  scsi: storvsc: Add the support of hibernation
  Drivers: hv: vmbus: Add module parameter to cap the VMBus version
  ...
2019-11-30 14:50:51 -08:00
Linus Torvalds
81b6b96475 dma-mapping updates for 5.5-rc1
- improve dma-debug scalability (Eric Dumazet)
  - tiny dma-debug cleanup (Dan Carpenter)
  - check for vmap memory in dma_map_single (Kees Cook)
  - check for dma_addr_t overflows in dma-direct when using
    DMA offsets (Nicolas Saenz Julienne)
  - switch the x86 sta2x11 SOC to use more generic DMA code
    (Nicolas Saenz Julienne)
  - fix arm-nommu dma-ranges handling (Vladimir Murzin)
  - use __initdata in CMA (Shyam Saini)
  - replace the bus dma mask with a limit (Nicolas Saenz Julienne)
  - merge the remapping helpers into the main dma-direct flow (me)
  - switch xtensa to the generic dma remap handling (me)
  - various cleanups around dma_capable (me)
  - remove unused dev arguments to various dma-noncoherent helpers (me)
 -----BEGIN PGP SIGNATURE-----
 
 iQI/BAABCgApFiEEgdbnc3r/njty3Iq9D55TZVIEUYMFAl3f+eULHGhjaEBsc3Qu
 ZGUACgkQD55TZVIEUYPyPg/+PVHCrhmepudQQFHu6wfurE5U77iNnoUifvG+b5z5
 5mHmTMkQwyox6rKDe8NuFApAhz1VJDSUgSelPmvTSOIEIGXCvX1p+GqRSVS5YQON
 aLzGvbWKE8hCpaPdDHKYDauD1FZGMM8L2P5oOMF9X9fQ94xxRqfqJM6c8iD16Sgg
 +aOgPNzTnxQHJFF/Dbt/mjJrKXWI+XF+bgUbH+l9yKa7Dd7ibmJR8yl9hs1jmp0H
 1CZ+CizwnAs57rCd1a6Ybc6gj59tySc03NMnnbTko+KDxrcbD3Ee2tpqHVkkCjYz
 Yl0m4FIpbotrpokL/FIS727bVvkjbWgoeM+kiVPoYzmZea3pq/tFDr6tp/BxDhFj
 TZXSFfgQljlYMD3ppSoklFlfjGriVWV0tPO3arPXwuuMF5EX/IMQmvxei05jpc8n
 iELNXOP9iZZkY4tLHy2hn2uWrxBRrS1WQwlLg9hahlNRzyfFSyHeP0zWlVDt+RgF
 5CCbEI+HQcUqg1FApB30lQNWTn1+dJftrpKVBlgNBIyIa/z2rFbt8GdSnItxjfQX
 /XX8EZbFvF6AcXkgURkYFIoKM/EbYShOSLcYA3PTUtcuTnF6Kk5eimySiGWZTVCS
 prruSFDZJOvL3SnOIMIiYVmBdB7lEbDyLI/VYuhoECXEDCJpVmRktNkJNg4q6/E+
 fjQ=
 =e5wO
 -----END PGP SIGNATURE-----

Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux; tag 'dma-mapping-5.5' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping updates from Christoph Hellwig:

 - improve dma-debug scalability (Eric Dumazet)

 - tiny dma-debug cleanup (Dan Carpenter)

 - check for vmap memory in dma_map_single (Kees Cook)

 - check for dma_addr_t overflows in dma-direct when using DMA offsets
   (Nicolas Saenz Julienne)

 - switch the x86 sta2x11 SOC to use more generic DMA code (Nicolas
   Saenz Julienne)

 - fix arm-nommu dma-ranges handling (Vladimir Murzin)

 - use __initdata in CMA (Shyam Saini)

 - replace the bus dma mask with a limit (Nicolas Saenz Julienne)

 - merge the remapping helpers into the main dma-direct flow (me)

 - switch xtensa to the generic dma remap handling (me)

 - various cleanups around dma_capable (me)

 - remove unused dev arguments to various dma-noncoherent helpers (me)

* 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux:

* tag 'dma-mapping-5.5' of git://git.infradead.org/users/hch/dma-mapping: (22 commits)
  dma-mapping: treat dev->bus_dma_mask as a DMA limit
  dma-direct: exclude dma_direct_map_resource from the min_low_pfn check
  dma-direct: don't check swiotlb=force in dma_direct_map_resource
  dma-debug: clean up put_hash_bucket()
  powerpc: remove support for NULL dev in __phys_to_dma / __dma_to_phys
  dma-direct: avoid a forward declaration for phys_to_dma
  dma-direct: unify the dma_capable definitions
  dma-mapping: drop the dev argument to arch_sync_dma_for_*
  x86/PCI: sta2x11: use default DMA address translation
  dma-direct: check for overflows on 32 bit DMA addresses
  dma-debug: increase HASH_SIZE
  dma-debug: reorder struct dma_debug_entry fields
  xtensa: use the generic uncached segment support
  dma-mapping: merge the generic remapping helpers into dma-direct
  dma-direct: provide mmap and get_sgtable method overrides
  dma-direct: remove the dma_handle argument to __dma_direct_alloc_pages
  dma-direct: remove __dma_direct_free_pages
  usb: core: Remove redundant vmap checks
  kernel: dma-contiguous: mark CMA parameters __initdata/__initconst
  dma-debug: add a schedule point in debug_dma_dump_mappings()
  ...
2019-11-28 11:16:43 -08:00
Bjorn Helgaas
f52412b151 Merge branch 'pci/virtualization'
- Fix erroneous intel-iommu dependency on CONFIG_AMD_IOMMU (Bjorn
    Helgaas)

  - Move pci_prg_resp_pasid_required() to CONFIG_PCI_PRI (Bjorn Helgaas)

  - Allow VFs to use PRI (the PF PRI is shared by the VFs, but the code
    previously didn't recognize that) (Kuppuswamy Sathyanarayanan)

  - Allow VFs to use PASID (the PF PASID capability is shared by the VFs,
    but the code previously didn't recognize that) (Kuppuswamy
    Sathyanarayanan)

  - Disconnect PF and VF ATS enablement, since ATS in PFs and associated
    VFs can be enabled independently (Kuppuswamy Sathyanarayanan)

  - Cache PRI and PASID capability offsets (Kuppuswamy Sathyanarayanan)

  - Cache the PRI PRG Response PASID Required bit (Bjorn Helgaas)

  - Consolidate ATS declarations in linux/pci-ats.h (Krzysztof Wilczynski)

  - Remove unused PRI and PASID stubs (Bjorn Helgaas)

  - Removed unnecessary EXPORT_SYMBOL_GPL() from ATS, PRI, and PASID
    interfaces that are only used by built-in IOMMU drivers (Bjorn Helgaas)

  - Hide PRI and PASID state restoration functions used only inside the PCI
    core (Bjorn Helgaas)

  - Fix the UPDCR register address in the Intel ACS quirk (Steffen
    Liebergeld)

  - Add a DMA alias quirk for the Intel VCA NTB (Slawomir Pawlowski)

  - Serialize sysfs sriov_numvfs reads vs writes (Pierre Crégut)

  - Update Cavium ACS quirk for ThunderX2 and ThunderX3 (George Cherian)

  - Unify ACS quirk implementations (Bjorn Helgaas)

* pci/virtualization:
  PCI: Unify ACS quirk desired vs provided checking
  PCI: Make ACS quirk implementations more uniform
  PCI: Apply Cavium ACS quirk to ThunderX2 and ThunderX3
  PCI/IOV: Serialize sysfs sriov_numvfs reads vs writes
  PCI: Add DMA alias quirk for Intel VCA NTB
  PCI: Fix Intel ACS quirk UPDCR register address
  PCI/ATS: Make pci_restore_pri_state(), pci_restore_pasid_state() private
  PCI/ATS: Remove unnecessary EXPORT_SYMBOL_GPL()
  PCI/ATS: Remove unused PRI and PASID stubs
  PCI/ATS: Consolidate ATS declarations in linux/pci-ats.h
  PCI/ATS: Cache PRI PRG Response PASID Required bit
  PCI/ATS: Cache PASID Capability offset
  PCI/ATS: Cache PRI Capability offset
  PCI/ATS: Disable PF/VF ATS service independently
  PCI/ATS: Handle sharing of PF PASID Capability with all VFs
  PCI/ATS: Handle sharing of PF PRI Capability with all VFs
  PCI/ATS: Move pci_prg_resp_pasid_required() to CONFIG_PCI_PRI
  iommu/vt-d: Select PCI_PRI for INTEL_IOMMU_SVM
2019-11-28 08:54:38 -06:00
Bjorn Helgaas
e87eb585d3 Merge branch 'pci/misc'
- Add NumaChip SPDX header (Krzysztof Wilczynski)

  - Replace EXTRA_CFLAGS with ccflags-y (Krzysztof Wilczynski)

  - Remove unused includes (Krzysztof Wilczynski)

  - Avoid AMD FCH XHCI USB PME# from D0 defect that prevents wakeup on USB
    2.0 or 1.1 connect events (Kai-Heng Feng)

  - Removed unused sysfs attribute groups (Ben Dooks)

  - Remove PTM and ASPM dependencies on PCIEPORTBUS (Bjorn Helgaas)

  - Add PCIe Link Control 2 register field definitions to replace magic
    numbers in AMDGPU and Radeon CIK/SI (Bjorn Helgaas)

  - Fix incorrect Link Control 2 Transmit Margin usage in AMDGPU and Radeon
    CIK/SI PCIe Gen3 link training (Bjorn Helgaas)

  - Use pcie_capability_read_word() instead of pci_read_config_word() in
    AMDGPU and Radeon CIK/SI (Frederick Lawler)

* pci/misc:
  drm/radeon: Prefer pcie_capability_read_word()
  drm/radeon: Replace numbers with PCI_EXP_LNKCTL2 definitions
  drm/radeon: Correct Transmit Margin masks
  drm/amdgpu: Prefer pcie_capability_read_word()
  drm/amdgpu: Replace numbers with PCI_EXP_LNKCTL2 definitions
  drm/amdgpu: Correct Transmit Margin masks
  PCI: Add #defines for Enter Compliance, Transmit Margin
  PCI: Allow building PCIe things without PCIEPORTBUS
  PCI: Remove PCIe Kconfig dependencies on PCI
  PCI/ASPM: Remove dependency on PCIEPORTBUS
  PCI/PTM: Remove dependency on PCIEPORTBUS
  PCI/PTM: Remove spurious "d" from granularity message
  PCI: sysfs: Remove unused attribute groups
  x86/PCI: Avoid AMD FCH XHCI USB PME# from D0 defect
  PCI: Remove unused includes and superfluous struct declaration
  x86/PCI: Replace deprecated EXTRA_CFLAGS with ccflags-y
  x86/PCI: Correct SPDX comment style
  x86/PCI: Add NumaChip SPDX GPL-2.0 to replace COPYING boilerplate
2019-11-28 08:54:32 -06:00
Rafael J. Wysocki
995e2ef082 Merge branches 'acpi-utils', 'acpi-platform', 'acpi-video' and 'acpi-doc'
* acpi-utils:
  iommu/amd: Switch to use acpi_dev_hid_uid_match()
  mmc: sdhci-acpi: Switch to use acpi_dev_hid_uid_match()
  ACPI / LPSS: Switch to use acpi_dev_hid_uid_match()
  ACPI / utils: Introduce acpi_dev_hid_uid_match() helper
  ACPI / utils: Move acpi_dev_get_first_match_dev() under CONFIG_ACPI
  ACPI / utils: Describe function parameters in kernel-doc

* acpi-platform:
  ACPI: platform: Unregister stale platform devices
  ACPI: Always build evged in

* acpi-video:
  ACPI: video: update doc for acpi_video_bus_DOS()

* acpi-doc:
  ACPI: Documentation: Minor spelling fix in namespace.rst
2019-11-26 10:30:49 +01:00
Boqun Feng
d7f0b2e450 drivers: iommu: hyperv: Make HYPERV_IOMMU only available on x86
Currently hyperv-iommu is implemented in an x86-specific way; for
example, the APIC is used. So make the HYPERV_IOMMU Kconfig option depend
on X86 as a preparation for enabling Hyper-V on architectures other than x86.

Cc: Lan Tianyu <Tianyu.Lan@microsoft.com>
Cc: Michael Kelley <mikelley@microsoft.com>
Cc: linux-hyperv@vger.kernel.org
Signed-off-by: Boqun Feng (Microsoft) <boqun.feng@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-11-21 20:10:45 -05:00
Nicolas Saenz Julienne
a7ba70f178 dma-mapping: treat dev->bus_dma_mask as a DMA limit
Using a mask to represent bus DMA constraints has a set of limitations.
The biggest one is that it can only hold a power of two (minus one). The
DMA mapping code is already aware of this and treats dev->bus_dma_mask
as a limit. This quirk is already used by some architectures, although
it is still rare.

With the introduction of the Raspberry Pi 4 we've found a new contender
for the use of bus DMA limits, as its PCIe bus can only address the
lower 3 GB of memory (of a total of 4 GB). This is impossible to represent
with a mask. To make things worse, the device-tree code rounds
non-power-of-two bus DMA limits up to the next power of two, which is
unacceptable in this case.

In the light of this, rename dev->bus_dma_mask to dev->bus_dma_limit all
over the tree and treat it as such. Note that dev->bus_dma_limit should
contain the highest accessible DMA address.
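
For illustration only (not part of the patch), a tiny standalone check of
the point above: a mask can only describe a span of 2^n bytes, so a 3 GiB
ceiling has no exact mask while a 4 GiB one does.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A mask covering addresses 0..limit spans limit + 1 bytes; that span
 * must be a power of two for an exact mask to exist. */
static bool exact_mask_exists(uint64_t limit)
{
        uint64_t span = limit + 1;

        return span && !(span & (span - 1));
}

int main(void)
{
        uint64_t three_gib = (3ULL << 30) - 1;  /* highest address: 3 GiB - 1 */
        uint64_t four_gib  = (4ULL << 30) - 1;

        printf("3 GiB limit has an exact mask? %s\n",
               exact_mask_exists(three_gib) ? "yes" : "no");    /* no  */
        printf("4 GiB limit has an exact mask? %s\n",
               exact_mask_exists(four_gib) ? "yes" : "no");     /* yes */
        return 0;
}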

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-11-21 18:14:35 +01:00
Krzysztof Wilczynski
bbd8810d39 PCI: Remove unused includes and superfluous struct declaration
Stop including <linux/pci.h> and <linux/msi.h> directly from
include/linux/of_pci.h, and remove the superfluous declaration of struct
of_phandle_args.

Move users of <linux/of_pci.h> to include <linux/pci.h> and
<linux/msi.h> directly rather than relying on both being included
transitively through <linux/of_pci.h>.

Link: https://lore.kernel.org/r/20190903113059.2901-1-kw@linux.com
Signed-off-by: Krzysztof Wilczynski <kw@linux.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Rob Herring <robh@kernel.org>
2019-11-21 07:49:29 -06:00
Christoph Hellwig
56e35f9c5b dma-mapping: drop the dev argument to arch_sync_dma_for_*
These are pure cache maintenance routines, so drop the unused
struct device argument.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2019-11-20 20:31:38 +01:00
Joerg Roedel
9b3a713fee Merge branches 'iommu/fixes', 'arm/qcom', 'arm/renesas', 'arm/rockchip', 'arm/mediatek', 'arm/tegra', 'arm/smmu', 'x86/amd', 'x86/vt-d', 'virtio' and 'core' into next 2019-11-12 17:11:25 +01:00
Robin Murphy
5b47748ecf iommu/rockchip: Don't provoke WARN for harmless IRQs
Although we don't generally expect IRQs to fire for a suspended IOMMU,
there are certain situations (particularly with debug options) where
we might legitimately end up with the pm_runtime_get_if_in_use() call
from rk_iommu_irq() returning 0. Since this doesn't represent an actual
error, follow the other parts of the driver and save the WARN_ON()
condition for a genuine negative value. Even if we do have spurious
IRQs due to a wedged VOP asserting the shared line, it's not this
driver's job to try to second-guess the IRQ core to warn about that.

Reported-by: Vasily Khoruzhick <anarsoul@gmail.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-12 17:07:47 +01:00
Deepa Dinamani
6c3a44ed3c iommu/vt-d: Turn off translations at shutdown
The intel-iommu driver assumes that the iommu state is
cleaned up at the start of the new kernel.
But when we try to kexec boot something other than the
Linux kernel, the cleanup cannot be relied upon.
Hence, clean up before we go down for reboot.

The cleanup at initialization is also kept, in case the BIOS
leaves the IOMMU enabled.

I considered turning off the iommu only during a kexec reboot, but a clean
shutdown always seems like a good idea. If someone wants to make it
conditional, such as for VMM live update, we can do that.  There doesn't
seem to be such a condition at this time.

Tested that, before this change, the info message
'DMAR: Translation was enabled for <iommu> but we are not in kdump mode'
was reported for each iommu. The message does not appear when
DMA remapping is not enabled on entry to the kernel.

Signed-off-by: Deepa Dinamani <deepa.kernel@gmail.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 16:07:13 +01:00
Yian Chen
f036c7fa0a iommu/vt-d: Check VT-d RMRR region in BIOS is reported as reserved
VT-d RMRR (Reserved Memory Region Reporting) regions are reserved
for device use only and should not be part of the OS's allocatable memory pool.

The BIOS e820 table reports the complete memory map to the OS, including
OS-usable memory ranges, BIOS-reserved memory ranges, etc.

An x86 BIOS may not be trusted to include RMRR regions as reserved-type
memory in its e820 memory map, hence validate every RMRR entry
against the e820 memory map to make sure the RMRR regions will not be
used by the OS for any other purpose.

ia64 EFI works fine, so the RMRR validation is implemented there as a
dummy function.
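
As a rough standalone sketch of the check described above (types and the
helper name are made up here, not the kernel's): an RMRR range only passes
if it lies entirely inside memory the firmware marked as reserved.

#include <stdbool.h>
#include <stdint.h>

struct mem_range {
        uint64_t start, end;    /* inclusive bounds */
        bool reserved;
};

/* Simplified: assumes the RMRR fits inside a single reserved range. */
static bool rmrr_is_reserved(const struct mem_range *map, int nr,
                             uint64_t rmrr_start, uint64_t rmrr_end)
{
        for (int i = 0; i < nr; i++) {
                if (map[i].reserved &&
                    map[i].start <= rmrr_start && rmrr_end <= map[i].end)
                        return true;
        }
        return false;           /* overlaps OS-usable memory: warn/reject */
}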

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
Signed-off-by: Yian Chen <yian.chen@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 16:06:07 +01:00
Jean-Philippe Brucker
34d1b0895d iommu/arm-smmu: Remove duplicate error message
Since commit 7723f4c5ec ("driver core: platform: Add an error message
to platform_get_irq*()"), platform_get_irq() displays an error when the
IRQ isn't found. Remove the error print from the SMMU driver. Note the
slight change of behaviour: no message is printed if platform_get_irq()
returns -EPROBE_DEFER, which probably doesn't concern the SMMU.

Fixes: 7723f4c5ec ("driver core: platform: Add an error message to platform_get_irq*()")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:44:17 +01:00
Jean-Philippe Brucker
f7aff1a93f iommu/arm-smmu-v3: Don't display an error when IRQ lines are missing
Since commit 7723f4c5ec ("driver core: platform: Add an error message
to platform_get_irq*()"), platform_get_irq_byname() displays an error
when the IRQ isn't found. Since the SMMUv3 driver uses that function to
query which interrupt method is available, the message is now displayed
during boot for any SMMUv3 that doesn't implement the combined
interrupt, or that implements MSIs.

[   20.700337] arm-smmu-v3 arm-smmu-v3.7.auto: IRQ combined not found
[   20.706508] arm-smmu-v3 arm-smmu-v3.7.auto: IRQ eventq not found
[   20.712503] arm-smmu-v3 arm-smmu-v3.7.auto: IRQ priq not found
[   20.718325] arm-smmu-v3 arm-smmu-v3.7.auto: IRQ gerror not found

Use platform_get_irq_byname_optional() to avoid displaying a spurious
error.

Fixes: 7723f4c5ec ("driver core: platform: Add an error message to platform_get_irq*()")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:43:56 +01:00
Yoshihiro Shimoda
1289f7f150 iommu/ipmmu-vmsa: Add utlb_offset_base
Since the memory mapping of the IPMMU will change in the future,
this patch adds a utlb_offset_base to struct ipmmu_features
for the IMUCTR and IMUASID registers. No behavior change.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:06:32 +01:00
Yoshihiro Shimoda
3667c9978b iommu/ipmmu-vmsa: Add helper functions for "uTLB" registers
Since the memory mapping of the IPMMU will change in the future,
this patch adds the helper functions ipmmu_utlb_reg() and
ipmmu_imu{asid,ctr}_write() for the "uTLB" registers. No behavior change.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:06:27 +01:00
Yoshihiro Shimoda
3dc28d9f59 iommu/ipmmu-vmsa: Calculate context registers' offset instead of a macro
Since the memory mapping of the IPMMU will change in the future,
this patch uses ipmmu_features values instead of a macro to
calculate the context registers' offset. No behavior change.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:06:20 +01:00
Yoshihiro Shimoda
16d9454f5e iommu/ipmmu-vmsa: Add helper functions for MMU "context" registers
Since the memory mapping of the IPMMU will change in the future,
this patch adds the helper functions ipmmu_ctx_{reg,read,write}()
for the MMU "context" registers. No behavior change.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:06:14 +01:00
Yoshihiro Shimoda
df9828aaa4 iommu/ipmmu-vmsa: tidyup register definitions
To more easily support hardware with different register memory
mappings in the future, this patch tidies up the register definitions
as below:
 - Add comments stating to which SoCs or SoC families they apply
 - Add categories for the MMU "context" and uTLB registers

No behavior change.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:05:57 +01:00
Yoshihiro Shimoda
77cf983892 iommu/ipmmu-vmsa: Remove all unused register definitions
To more easily support hardware with different register memory
mappings in the future, this patch removes all unused register
definitions.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:05:50 +01:00
Yong Wu
c90ae4a635 iommu/mediatek: Reduce the tlb flush timeout value
Reduce the tlb flush timeout value from 100000us to 1000us. The original
value would leave the kernel stuck for 100 ms with interrupts disabled,
which could have other side effects. The flush is expected to always take
much less than 1 ms, so use that as the timeout instead.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:02:34 +01:00
Yong Wu
60829b4d00 iommu/mediatek: Get rid of the pgtlock
Now that we have tlb_lock for the HW tlb flush, the pgtable code no
longer needs the external "pgtlock". This patch removes the "pgtlock".

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:02:34 +01:00
Yong Wu
1f4fd62481 iommu/mediatek: Move the tlb_sync into tlb_flush
Right now, tlb_add_flush_nosync and tlb_sync always appear together,
so merge the two functions into one (and move the tlb_lock into the new
function). No functional change.

Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:02:34 +01:00
Yong Wu
67caf7e2b5 iommu/mediatek: Delete the leaf in the tlb_flush
In our tlb range flush, we don't care about the "leaf" argument. Remove
it to simplify the code. No functional change.

The "granule" is also unnecessary for us; keep it only to satisfy the
format of tlb_flush_walk.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:02:34 +01:00
Yong Wu
a7a04ea34e iommu/mediatek: Use gather to achieve the tlb range flush
Use the iommu_gather mechanism to achieve the tlb range flush.
Gather the iova range in the "tlb_add_page", then flush the merged iova
range in iotlb_sync.
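
As a standalone sketch of the gather idea with made-up names (the real
code uses the iommu core's gather structure): accumulate the covering
range as pages are queued, then issue a single flush at sync time.

#include <stddef.h>
#include <stdint.h>

struct gather {
        uint64_t start, end;    /* [start, end) queued so far; 0,0 = empty */
};

static void gather_add_page(struct gather *g, uint64_t iova, size_t size)
{
        if (g->start == g->end) {               /* nothing queued yet */
                g->start = iova;
                g->end = iova + size;
                return;
        }
        if (iova < g->start)
                g->start = iova;
        if (iova + size > g->end)
                g->end = iova + size;
}

/* At sync time, one invalidation of g->start..g->end replaces the
 * per-page flushes, after which the gather is reset. */
static void gather_sync(struct gather *g)
{
        g->start = 0;
        g->end = 0;
}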

Suggested-by: Tomasz Figa <tfiga@chromium.org>
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:02:34 +01:00
Yong Wu
da3cc91b8d iommu/mediatek: Add a new tlb_lock for tlb_flush
Commit 4d689b6194 ("iommu/io-pgtable-arm-v7s: Convert to IOMMU API
TLB sync") moved the tlb_sync for unmap from v7s into the iommu
framework and added a new function, "mtk_iommu_iotlb_sync". But that
function lacked a lock, so the variable "tlb_flush_active" could be
changed unexpectedly, and we could see this warning log randomly:

mtk-iommu 10205000.iommu: Partial TLB flush timed out, falling back to
full flush

The HW strictly requires tlb_flush/tlb_sync in pairs, so this patch adds
a new tlb_lock for tlb operations to fix this issue.

Fixes: 4d689b6194 ("iommu/io-pgtable-arm-v7s: Convert to IOMMU API TLB sync")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:02:34 +01:00
Yong Wu
2009122f1d iommu/mediatek: Correct the flush_iotlb_all callback
Use the correct tlb_flush_all instead of the original one.

Fixes: 4d689b6194 ("iommu/io-pgtable-arm-v7s: Convert to IOMMU API TLB sync")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-11-11 15:02:33 +01:00
Will Deacon
dd5ddd3c7a iommu/io-pgtable-arm: Rename IOMMU_QCOM_SYS_CACHE and improve doc
The 'IOMMU_QCOM_SYS_CACHE' IOMMU protection flag is exposed to all
users of the IOMMU API. Despite its name, the idea behind it isn't
especially tied to Qualcomm implementations and could conceivably be
used by other systems.

Rename it to 'IOMMU_SYS_CACHE_ONLY' and update the comment to describe
a bit better the idea behind it.

Cc: Robin Murphy <robin.murphy@arm.com>
Cc: "Isaac J. Manjarres" <isaacm@codeaurora.org>
Signed-off-by: Will Deacon <will@kernel.org>
2019-11-07 12:19:05 +00:00
Robin Murphy
205577ab6f iommu/io-pgtable-arm: Rationalise MAIR handling
Between VMSAv8-64 and the various 32-bit formats, there is either one
64-bit MAIR or a pair of 32-bit MAIR0/MAIR1 or NMRR/PRRR registers.
As such, keeping two 64-bit values in io_pgtable_cfg has always been
overkill.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-11-04 19:59:30 +00:00
Robin Murphy
5fb190b0b5 iommu/io-pgtable-arm: Simplify level indexing
The nature of the LPAE format means that data->pg_shift is always
redundant with data->bits_per_level, since they represent the size of a
page and the number of PTEs per page respectively, and the size of a PTE
is constant. Thus it works out more efficient to only store the latter,
and derive the former via a trivial addition where necessary.
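
A quick standalone illustration of the redundancy, assuming 8-byte PTEs
as in the LPAE formats:

#include <stdio.h>

#define PTE_BYTES_SHIFT 3       /* log2(sizeof(u64)) */

static int pg_shift(int bits_per_level)
{
        return bits_per_level + PTE_BYTES_SHIFT;
}

int main(void)
{
        printf("9 bits/level  -> page shift %d (4K granule)\n", pg_shift(9));
        printf("11 bits/level -> page shift %d (16K granule)\n", pg_shift(11));
        printf("13 bits/level -> page shift %d (64K granule)\n", pg_shift(13));
        return 0;
}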

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[will: Reworked granule check in iopte_to_paddr()]
Signed-off-by: Will Deacon <will@kernel.org>
2019-11-04 19:59:08 +00:00
Robin Murphy
c79278c185 iommu/io-pgtable-arm: Simplify PGD size handling
We use data->pgd_size directly for the one-off allocation and freeing of
the top-level table, but otherwise it serves for ARM_LPAE_PGD_IDX() to
repeatedly re-calculate the effective number of top-level address bits
it represents. Flip this around so we store the form we most commonly
need, and derive the lesser-used one instead. This cuts a whole bunch of
code out of the map/unmap/iova_to_phys fast-paths.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-11-04 19:34:31 +00:00
Robin Murphy
594ab90fc4 iommu/io-pgtable-arm: Simplify start level lookup
Beyond a couple of allocation-time calculations, data->levels is only
ever used to derive the start level. Storing the start level directly
leads to a small reduction in object code, which should help eke out a
little more efficiency, and slightly more readable source to boot.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-11-04 19:34:31 +00:00
Robin Murphy
67f3e53d2a iommu/io-pgtable-arm: Simplify bounds checks
We're merely checking that the relevant upper bits of each address
are all zero, so there are cheaper ways to achieve that.
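
For example (standalone, not the patch itself), both of the following
reject an address with bits set above the input address size, but the
second never has to materialise the bound (ias < 64 assumed):

#include <stdbool.h>
#include <stdint.h>

static bool out_of_range_compare(uint64_t iova, unsigned int ias)
{
        return iova > ((1ULL << ias) - 1);      /* build the bound, compare */
}

static bool out_of_range_shift(uint64_t iova, unsigned int ias)
{
        return iova >> ias;                     /* any bits above ias set? */
}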

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-11-04 19:34:31 +00:00
Robin Murphy
f7b90d2c74 iommu/io-pgtable-arm: Rationalise size check
It makes little sense to only validate the requested size after we think
we've found a matching block size - making the check up-front is simple,
and far more logical than waiting to walk off the bottom of the table to
infer that we must have been passed a bogus size to start with.

We're missing an equivalent check on the unmap path, so add that as well
for consistency.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-11-04 19:34:31 +00:00
Robin Murphy
b5813c164e iommu/io-pgtable: Make selftest gubbins consistently __init
The selftests run as an initcall, but the annotation of the various
callbacks and data seems to be somewhat arbitrary. Add it consistently
for everything related to the selftests.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-11-04 19:34:31 +00:00
Will Deacon
db22a9de7a Merge branch 'for-joerg/arm-smmu/fixes' into for-joerg/arm-smmu/updates
Merge in ARM SMMU fixes to avoid conflicts in the ARM io-pgtable code.

* for-joerg/arm-smmu/fixes:
  iommu/io-pgtable-arm: Support all Mali configurations
  iommu/io-pgtable-arm: Correct Mali attributes
  iommu/arm-smmu: Free context bitmap in the err path of arm_smmu_init_domain_context
2019-11-04 19:33:59 +00:00
Vivek Gautam
759aaa10c7 iommu: arm-smmu-impl: Add sdm845 implementation hook
Add reset hook for sdm845 based platforms to turn off
the wait-for-safe sequence.

Understanding how wait-for-safe logic affects USB and UFS performance
on MTP845 and DB845 boards:

Qcom's implementation of arm,mmu-500 adds WAIT-FOR-SAFE logic
to address under-performance issues in real-time clients, such as
Display and Camera.
On receiving an invalidation request, the SMMU forwards a SAFE request
to these clients and waits for a SAFE ack signal from the real-time clients.
The SAFE signal from such clients is used to qualify the start of
invalidation.
This logic is controlled by chicken bits, one for each - MDP (display),
IFE0, and IFE1 (camera), that can be accessed only from secure software
on sdm845.

This configuration, however, degrades the performance of non-real-time
clients such as USB and UFS. This happens because, with the wait-for-safe
logic enabled, the hardware tries to throttle non-real-time clients while
waiting for SAFE ack signals from the real-time clients.

On MTP845 and DB845 devices, with the wait-for-safe logic enabled by the
bootloaders, we see degraded performance of USB and UFS when the kernel
enables the SMMU stage-1 translations for these clients.
Turning off this wait-for-safe logic from the kernel gets us back the
performance of USB and UFS devices, until we revisit this once we start
seeing performance issues on display/camera on upstream-supported SDM845
platforms.
The bootloaders on these boards implement secure monitor callbacks to
handle a specific command, QCOM_SCM_SVC_SMMU_PROGRAM, with which the
logic can be toggled.

There are other boards such as cheza whose bootloaders don't enable this
logic. Such boards don't implement callbacks to handle the specific SCM
call so disabling this logic for such boards will be a no-op.

This change is inspired by the downstream change from Patrick Daly
to address performance issues with display and camera by handling
this wait-for-safe within separate io-pagetable ops to do TLB
maintenance. So a big thanks to him for the change and for all the
offline discussions.

Without this change the UFS reads are pretty slow:
$ time dd if=/dev/sda of=/dev/zero bs=1048576 count=10 conv=sync
10+0 records in
10+0 records out
10485760 bytes (10.0MB) copied, 22.394903 seconds, 457.2KB/s
real    0m 22.39s
user    0m 0.00s
sys     0m 0.01s

With this change they are back to rock!
$ time dd if=/dev/sda of=/dev/zero bs=1048576 count=300 conv=sync
300+0 records in
300+0 records out
314572800 bytes (300.0MB) copied, 1.030541 seconds, 291.1MB/s
real    0m 1.03s
user    0m 0.00s
sys     0m 0.54s

Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Signed-off-by: Will Deacon <will@kernel.org>
2019-11-04 17:48:37 +00:00
Rob Clark
ee9bdfedd3 iommu/arm-smmu: Avoid pathological RPM behaviour for unmaps
When a game, browser, or anything else using a lot of GPU buffers exits,
there can be many hundreds or thousands of buffers to unmap and free.  If
the GPU is otherwise suspended, this can cause arm-smmu to resume/suspend
for each buffer, resulting in 5-10 seconds' worth of reprogramming the
context bank (arm_smmu_write_context_bank()/arm_smmu_write_s2cr()/etc).
To the user it would appear that the system just locked up.

A simple solution is to use pm_runtime_put_autosuspend() instead, so we
don't immediately suspend the SMMU device.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
Signed-off-by: Will Deacon <will@kernel.org>
2019-11-01 16:28:39 +00:00
Cristiane Naves
c1c8058dfb iommu/virtio: Remove unused variable
Remove the unneeded return variable. Issue found by
coccicheck (scripts/coccinelle/misc/returnvar.cocci).

Signed-off-by: Cristiane Naves <cristianenavescardoso09@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-30 10:42:47 +01:00
Logan Gunthorpe
3c124435e8 iommu/amd: Support multiple PCI DMA aliases in IRQ Remapping
Non-Transparent Bridge (NTB) devices (among others) may have many DMA
aliases, since the hardware will send requests with different device IDs
depending on their origin across the bridged hardware.

See commit ad281ecf1c ("PCI: Add DMA alias quirk for Microsemi Switchtec
NTB") for more information on this.

The AMD IOMMU IRQ remapping functionality ignores all PCI aliases for
IRQs so if devices send an interrupt from one of their aliases they
will be blocked on AMD hardware with the IOMMU enabled.

To fix this, ensure IRQ remapping is enabled for all aliases with
MSI interrupts.

This is analogous to the functionality added to the Intel IRQ remapping
code in commit 3f0c625c6a ("iommu/vt-d: Allow interrupts from the entire
bus for aliased devices")

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-30 10:36:19 +01:00
Logan Gunthorpe
3332364e4e iommu/amd: Support multiple PCI DMA aliases in device table
Non-Transparent Bridge (NTB) devices (among others) may have many DMA
aliases, since the hardware will send requests with different device IDs
depending on their origin across the bridged hardware.

See commit ad281ecf1c ("PCI: Add DMA alias quirk for Microsemi
Switchtec NTB") for more information on this.

The AMD IOMMU ignores all the PCI aliases except the last one so DMA
transfers from these aliases will be blocked on AMD hardware with the
IOMMU enabled.

To fix this, ensure the DTEs are cloned for every PCI alias. This is
done by copying the DTE data for each alias as well as the IVRS alias
every time it is changed.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-30 10:36:19 +01:00
John Donnelly
160c63f909 iommu/vt-d: Fix panic after kexec -p for kdump
This cures a panic on restart after a kexec operation on 5.3 and 5.4
kernels.

The underlying state of the iommu registers (iommu->flags &
VTD_FLAG_TRANS_PRE_ENABLED) on a restart results in a domain being marked as
"DEFER_DEVICE_DOMAIN_INFO" that produces an Oops in identity_mapping().

[   43.654737] BUG: kernel NULL pointer dereference, address:
0000000000000056
[   43.655720] #PF: supervisor read access in kernel mode
[   43.655720] #PF: error_code(0x0000) - not-present page
[   43.655720] PGD 0 P4D 0
[   43.655720] Oops: 0000 [#1] SMP PTI
[   43.655720] CPU: 0 PID: 1 Comm: swapper/0 Not tainted
5.3.2-1940.el8uek.x86_64 #1
[   43.655720] Hardware name: Oracle Corporation ORACLE SERVER
X5-2/ASM,MOTHERBOARD,1U, BIOS 30140300 09/20/2018
[   43.655720] RIP: 0010:iommu_need_mapping+0x29/0xd0
[   43.655720] Code: 00 0f 1f 44 00 00 48 8b 97 70 02 00 00 48 83 fa ff
74 53 48 8d 4a ff b8 01 00 00 00 48 83 f9 fd 76 01 c3 48 8b 35 7f 58 e0
01 <48> 39 72 58 75 f2 55 48 89 e5 41 54 53 48 8b 87 28 02 00 00 4c 8b
[   43.655720] RSP: 0018:ffffc9000001b9b0 EFLAGS: 00010246
[   43.655720] RAX: 0000000000000001 RBX: 0000000000001000 RCX:
fffffffffffffffd
[   43.655720] RDX: fffffffffffffffe RSI: ffff8880719b8000 RDI:
ffff8880477460b0
[   43.655720] RBP: ffffc9000001b9e8 R08: 0000000000000000 R09:
ffff888047c01700
[   43.655720] R10: 00002194036fc692 R11: 0000000000000000 R12:
0000000000000000
[   43.655720] R13: ffff8880477460b0 R14: 0000000000000cc0 R15:
ffff888072d2b558
[   43.655720] FS:  0000000000000000(0000) GS:ffff888071c00000(0000)
knlGS:0000000000000000
[   43.655720] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   43.655720] CR2: 0000000000000056 CR3: 000000007440a002 CR4:
00000000001606b0
[   43.655720] Call Trace:
[   43.655720]  ? intel_alloc_coherent+0x2a/0x180
[   43.655720]  ? __schedule+0x2c2/0x650
[   43.655720]  dma_alloc_attrs+0x8c/0xd0
[   43.655720]  dma_pool_alloc+0xdf/0x200
[   43.655720]  ehci_qh_alloc+0x58/0x130
[   43.655720]  ehci_setup+0x287/0x7ba
[   43.655720]  ? _dev_info+0x6c/0x83
[   43.655720]  ehci_pci_setup+0x91/0x436
[   43.655720]  usb_add_hcd.cold.48+0x1d4/0x754
[   43.655720]  usb_hcd_pci_probe+0x2bc/0x3f0
[   43.655720]  ehci_pci_probe+0x39/0x40
[   43.655720]  local_pci_probe+0x47/0x80
[   43.655720]  pci_device_probe+0xff/0x1b0
[   43.655720]  really_probe+0xf5/0x3a0
[   43.655720]  driver_probe_device+0xbb/0x100
[   43.655720]  device_driver_attach+0x58/0x60
[   43.655720]  __driver_attach+0x8f/0x150
[   43.655720]  ? device_driver_attach+0x60/0x60
[   43.655720]  bus_for_each_dev+0x74/0xb0
[   43.655720]  driver_attach+0x1e/0x20
[   43.655720]  bus_add_driver+0x151/0x1f0
[   43.655720]  ? ehci_hcd_init+0xb2/0xb2
[   43.655720]  ? do_early_param+0x95/0x95
[   43.655720]  driver_register+0x70/0xc0
[   43.655720]  ? ehci_hcd_init+0xb2/0xb2
[   43.655720]  __pci_register_driver+0x57/0x60
[   43.655720]  ehci_pci_init+0x6a/0x6c
[   43.655720]  do_one_initcall+0x4a/0x1fa
[   43.655720]  ? do_early_param+0x95/0x95
[   43.655720]  kernel_init_freeable+0x1bd/0x262
[   43.655720]  ? rest_init+0xb0/0xb0
[   43.655720]  kernel_init+0xe/0x110
[   43.655720]  ret_from_fork+0x24/0x50

Fixes: 8af46c784e ("iommu/vt-d: Implement is_attach_deferred iommu ops entry")
Cc: stable@vger.kernel.org # v5.3+

Signed-off-by: John Donnelly <john.p.donnelly@oracle.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-30 10:30:22 +01:00
Takashi Iwai
ad3e8da2d4 iommu/amd: Apply the same IVRS IOAPIC workaround to Acer Aspire A315-41
Acer Aspire A315-41 requires the very same workaround as the existing
quirk for Dell Latitude 5495.  Add the new entry for that.

BugLink: https://bugzilla.suse.com/show_bug.cgi?id=1137799
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-30 10:24:03 +01:00
Denys Vlasenko
a5bbbf37c6 iommu/amd: Do not re-fetch iommu->cmd_buf_tail
The compiler is not smart enough to realize that iommu->cmd_buf_tail
can't be modified across memcpy:

41 8b 45 74          mov    0x74(%r13),%eax   # iommu->cmd_buf_tail
44 8d 78 10          lea    0x10(%rax),%r15d  # += sizeof(*cmd)
41 81 e7 ff 1f 00 00 and    $0x1fff,%r15d     # %= CMD_BUFFER_SIZE
49 03 45 68          add    0x68(%r13),%rax   # target = iommu->cmd_buf + iommu->cmd_buf_tail
45 89 7d 74          mov    %r15d,0x74(%r13)  # store to iommu->cmd_buf_tail
49 8b 34 24          mov    (%r12),%rsi       # memcpy
49 8b 7c 24 08       mov    0x8(%r12),%rdi    # memcpy
48 89 30             mov    %rsi,(%rax)       # memcpy
48 89 78 08          mov    %rdi,0x8(%rax)    # memcpy
49 8b 55 38          mov    0x38(%r13),%rdx   # iommu->mmio_base
41 8b 45 74          mov    0x74(%r13),%eax   # redundant load of iommu->cmd_buf_tail
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
89 82 08 20 00 00    mov    %eax,0x2008(%rdx) # writel
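
One way to avoid the reload, sketched here as standalone code with
simplified, made-up names (not the actual driver change): keep the new
tail in a local and hand that same value to the MMIO write.

#include <stdint.h>
#include <string.h>

#define CMD_BUFFER_SIZE 8192
#define CMD_SIZE        16

struct fake_iommu {
        uint8_t cmd_buf[CMD_BUFFER_SIZE];
        uint32_t cmd_buf_tail;
        volatile uint32_t *mmio_cmd_tail;  /* stands in for the writel() target */
};

static void queue_command(struct fake_iommu *iommu, const void *cmd)
{
        uint32_t tail = iommu->cmd_buf_tail;

        memcpy(iommu->cmd_buf + tail, cmd, CMD_SIZE);
        tail = (tail + CMD_SIZE) % CMD_BUFFER_SIZE;
        iommu->cmd_buf_tail = tail;

        *iommu->mmio_cmd_tail = tail;      /* reuse the local, no second load */
}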

CC: Tom Lendacky <thomas.lendacky@amd.com>
CC: Joerg Roedel <jroedel@suse.de>
CC: linux-kernel@vger.kernel.org
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-30 10:19:49 +01:00
YueHaibing
565d454280 iommu/ipmmu-vmsa: Remove dev_err() on platform_get_irq() failure
platform_get_irq() will call dev_err() itself on failure,
so there is no need for the driver to also do this.
This is detected by coccinelle.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-30 10:16:37 +01:00
Linus Torvalds
964f9cfaae dma-mapping fix for 5.4
- fix a regression in the intel-iommu get_required_mask conversion
    (Arvind Sankar)
 -----BEGIN PGP SIGNATURE-----
 
 iQI/BAABCgApFiEEgdbnc3r/njty3Iq9D55TZVIEUYMFAl2z3OELHGhjaEBsc3Qu
 ZGUACgkQD55TZVIEUYOyJhAAwQalrpmmP4NLzx8J29ZLLFTpvMkQu7khHXxnfXd0
 +/aKg6g1teyJvg/Vxb8GQeGi/mTKClCjUvlS+88AQm1vR/9wLb4OvQPcHhNmG84s
 YxJGeDcIQzeXIpV1s6bcADcIAoYHZB1Ph0VobQeSNiJEAq4/ILCUfsVgcPbOPdDQ
 49/b8jrGXk/A/MMzJo2YefqQec2D5/7LCEK++IZAOnlnL/hd+YiB8Y1W8tjAMwXO
 ANOwpRGD+tUfjlP6DvuIbefPGVW5B0fdSa04KYqg03bZOSVTThNCSdWqXTcaOXFu
 MmAHhzrRiUH184d69pjWM371Qx6dF+fallkezrZXVqyInVww9Vca708sJPP3w9YD
 QjP2eYy1xrcPI9e84Xqad8o6TRr+wzmtQIHNRcm9ZrZhi/fdjUKPeBZkBbeOpGcd
 CLaqV8lVOFtVEHqtUq9egJ77FUdmCvDpaz7XaT3o33b8Wl70cF5G1/J4+CYkMHWM
 y67h7GpBaay7d6ZJyLbtqB29AM0PQnftJRZfef1dP3hGZKswZYuDuseMfkrMwPzt
 6MRWpSN+kn4B4HugO+W8OXVO2heZFb7sqs7BwfjgWWOAn5NN9Jvq2s1PYKj0CluR
 wB8NAhulNrkVslUk3Mx0baPDhO9ut3bMXRhVJcpXFJV23oz0HA1qqxD37vDglRHd
 aNM=
 =jbwS
 -----END PGP SIGNATURE-----

Merge tag 'dma-mapping-5.4-2' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping fix from Christoph Hellwig:
 "Fix a regression in the intel-iommu get_required_mask conversion
  (Arvind Sankar)"

* tag 'dma-mapping-5.4-2' of git://git.infradead.org/users/hch/dma-mapping:
  iommu/vt-d: Return the correct dma mask when we are bypassing the IOMMU
2019-10-26 06:29:04 -04:00
Arvind Sankar
9c24eaf81c iommu/vt-d: Return the correct dma mask when we are bypassing the IOMMU
We must return a mask covering the full physical RAM when bypassing the
IOMMU mapping. Also, in iommu_need_mapping, we need to check using
dma_direct_get_required_mask to ensure that the device's dma_mask can
cover physical RAM before deciding to bypass IOMMU mapping.

Based on an earlier patch from Christoph Hellwig.

Fixes: 249baa5479 ("dma-mapping: provide a better default ->get_required_mask")
Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-10-18 17:19:20 +02:00
Joerg Roedel
46ac18c347 iommu/amd: Check PM_LEVEL_SIZE() condition in locked section
The increase_address_space() function has to check the PM_LEVEL_SIZE()
condition again under the domain->lock to avoid a false trigger of the
WARN_ON_ONCE() and to avoid increasing the address space more often than
necessary.
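
The shape of the fix, as a generic standalone model of the double-check
pattern (a pthread mutex stands in for domain->lock and the size formula
is simplified; this is not the AMD driver's code):

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

static pthread_mutex_t domain_lock = PTHREAD_MUTEX_INITIALIZER;
static int pt_mode = 3;                         /* current page-table level */

static bool address_fits(uint64_t address)
{
        return address <= (1ULL << (12 + 9 * pt_mode)) - 1;
}

static void increase_address_space(uint64_t address)
{
        if (address_fits(address))              /* cheap unlocked check */
                return;

        pthread_mutex_lock(&domain_lock);
        if (!address_fits(address))             /* re-check under the lock */
                pt_mode++;                      /* grow at most one level here */
        pthread_mutex_unlock(&domain_lock);
}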

Reported-by: Qian Cai <cai@lca.pw>
Fixes: 754265bcab ("iommu/amd: Fix race in increase_address_space()")
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-18 16:52:37 +02:00
Thierry Reding
96d3ab802e iommu/tegra-smmu: Fix page tables in > 4 GiB memory
Page tables that reside in physical memory beyond the 4 GiB boundary are
currently not working properly. The reason is that when the physical
address for page directory entries is read, it gets truncated at 32 bits
and can cause crashes when passing that address to the DMA API.

Fix this by first casting the PDE value to a dma_addr_t and then using
the page frame number mask for the SMMU instance to mask out the invalid
bits, which are typically used for mapping attributes, etc.
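
A standalone illustration of the truncation, with made-up PDE layout and
mask values rather than the Tegra register format:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t pde = 0xe0123456;              /* attribute bits above a PFN */
        uint32_t pfn_mask = 0x03ffffff;         /* pretend per-SoC PFN mask */
        unsigned int shift = 12;                /* 4 KiB pages */

        /* 32-bit arithmetic: the top bit of the address is lost. */
        uint32_t truncated = (pde & pfn_mask) << shift;

        /* Cast to a wide type first, then mask and shift, as the fix does
         * with dma_addr_t. */
        uint64_t full = ((uint64_t)pde & pfn_mask) << shift;

        printf("truncated: %#010x\n", truncated);               /* 0x23456000  */
        printf("full:      %#llx\n", (unsigned long long)full); /* 0x123456000 */
        return 0;
}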

Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-18 11:46:11 +02:00
Navneet Kumar
e31e592954 iommu/tegra-smmu: Fix client enablement order
Enable clients' translation only after setting up the swgroups.

Signed-off-by: Navneet Kumar <navneetk@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-18 11:46:11 +02:00
Navneet Kumar
446152d5b6 iommu/tegra-smmu: Use non-secure register for flushing
Use PTB_ASID instead of SMMU_CONFIG to flush the SMMU.
PTB_ASID can be accessed from non-secure mode, SMMU_CONFIG cannot be.
Using SMMU_CONFIG could pose a problem when the kernel doesn't have
secure-mode access enabled from boot.

Signed-off-by: Navneet Kumar <navneetk@nvidia.com>
Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-18 11:46:11 +02:00
Joerg Roedel
3057fb9377 iommu/amd: Pass gfp flags to iommu_map_page() in amd_iommu_map()
A recent commit added a gfp parameter to amd_iommu_map() to make it
callable from atomic context, but forgot to pass it down to
iommu_map_page() and left GFP_KERNEL there. This caused
sleep-while-atomic warnings and needs to be fixed.

Reported-by: Qian Cai <cai@lca.pw>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: 781ca2de89 ("iommu: Add gfp parameter to iommu_ops::map")
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-18 11:40:38 +02:00
Ezequiel Garcia
42bb97b80f iommu: rockchip: Free domain on .domain_free
The IOMMU domain resource lifetime is well-defined, managed
by .domain_alloc and .domain_free.

Therefore, domain-specific resources shouldn't be tied to
the device lifetime, but instead to the domain's.

Signed-off-by: Ezequiel Garcia <ezequiel@collabora.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-16 09:43:07 +02:00
Bjorn Helgaas
fd872843ec iommu/vt-d: Select PCI_PRI for INTEL_IOMMU_SVM
Previously intel-iommu.c depended on CONFIG_AMD_IOMMU in an undesirable
way.  When CONFIG_INTEL_IOMMU_SVM=y, iommu_enable_dev_iotlb() calls PRI
interfaces (pci_reset_pri() and pci_enable_pri()), but those are only
implemented when CONFIG_PCI_PRI is enabled.

The INTEL_IOMMU_SVM Kconfig did nothing with PCI_PRI, but AMD_IOMMU selects
PCI_PRI.  So if AMD_IOMMU was enabled, intel-iommu.c got the full PRI
interfaces, but if AMD_IOMMU was not enabled, it got the PRI stubs.

Make the iommu_enable_dev_iotlb() behavior independent of AMD_IOMMU by
having INTEL_IOMMU_SVM select PCI_PRI so iommu_enable_dev_iotlb() always
uses the full implementations of PRI interfaces.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
Acked-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 16:37:10 -05:00
Suthikulpanit, Suravee
470eb3b311 iommu/amd: Simplify decoding logic for INVALID_PPR_REQUEST event
Reuse existing macro to simplify the code and improve readability.

Cc: Joerg Roedel <jroedel@suse.de>
Cc: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 14:13:55 +02:00
Suthikulpanit, Suravee
ec21f17a94 iommu/amd: Fix incorrect PASID decoding from event log
IOMMU Event Log encodes 20-bit PASID for events:
    ILLEGAL_DEV_TABLE_ENTRY
    IO_PAGE_FAULT
    PAGE_TAB_HARDWARE_ERROR
    INVALID_DEVICE_REQUEST
as:
    PASID[15:0]  = bit 47:32
    PASID[19:16] = bit 19:16

Note that the INVALID_PPR_REQUEST event has a different encoding
from the rest of the events, as follows:
    PASID[15:0]  = bit 31:16
    PASID[19:16] = bit 45:42

So, fix the decoding logic.
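
Purely as an illustration of the two layouts quoted above, operating on
the first 64 bits of an event entry (standalone code, not the driver's):

#include <stdint.h>

/* ILLEGAL_DEV_TABLE_ENTRY, IO_PAGE_FAULT, PAGE_TAB_HARDWARE_ERROR,
 * INVALID_DEVICE_REQUEST */
static uint32_t pasid_from_generic_event(uint64_t ev)
{
        return (uint32_t)((ev >> 32) & 0xffff) |      /* PASID[15:0]  <- 47:32 */
               (uint32_t)(((ev >> 16) & 0xf) << 16);  /* PASID[19:16] <- 19:16 */
}

/* INVALID_PPR_REQUEST */
static uint32_t pasid_from_ppr_event(uint64_t ev)
{
        return (uint32_t)((ev >> 16) & 0xffff) |      /* PASID[15:0]  <- 31:16 */
               (uint32_t)(((ev >> 42) & 0xf) << 16);  /* PASID[19:16] <- 45:42 */
}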

Fixes: d64c0486ed ("iommu/amd: Update the PASID information printed to the system log")
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 14:13:31 +02:00
Jacob Pan
808be0aae5 iommu: Introduce guest PASID bind function
Guest shared virtual address (SVA) may require the host to shadow guest
PASID tables. Guest PASIDs can also be allocated from the host via
enlightened interfaces. In this case, the guest needs to bind the guest
mm, i.e. the CR3 in guest physical address, to the actual PASID table in
the host IOMMU. Nesting will be turned on such that guest virtual
addresses can go through a two-level translation:
- 1st level translates GVA to GPA
- 2nd level translates GPA to HPA
This patch introduces APIs to bind guest PASID data to the assigned
device entry in the physical IOMMU. See the diagram below for usage
explanation.

    .-------------.  .---------------------------.
    |   vIOMMU    |  | Guest process mm, FL only |
    |             |  '---------------------------'
    .----------------/
    | PASID Entry |--- PASID cache flush -
    '-------------'                       |
    |             |                       V
    |             |                      GP
    '-------------'
Guest
------| Shadow |----------------------- GP->HP* ---------
      v        v                          |
Host                                      v
    .-------------.  .----------------------.
    |   pIOMMU    |  | Bind FL for GVA-GPA  |
    |             |  '----------------------'
    .----------------/  |
    | PASID Entry |     V (Nested xlate)
    '----------------\.---------------------.
    |             |   |Set SL to GPA-HPA    |
    |             |   '---------------------'
    '-------------'

Where:
 - FL = First level/stage one page tables
 - SL = Second level/stage two page tables
 - GP = Guest PASID
 - HP = Host PASID
* Conversion needed if non-identity GP-HP mapping option is chosen.

Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 13:34:43 +02:00
Jacob Pan
e5c0bd7f22 iommu/ioasid: Add custom allocators
IOASID allocation may rely on platform-specific methods. One use case is
that, when running in a guest, an emulated allocation interface is needed
to communicate with the host in order to obtain system-wide global
IOASIDs. Here we call these platform-specific allocators custom allocators.

Custom IOASID allocators can be registered at runtime and take precedence
over the default XArray allocator. They have these attributes:

- provides platform specific alloc()/free() functions with private data.
- lookup of allocation results is not provided by the allocator; lookup
  requests must be handled by the IOASID framework through its own XArray.
- allocators can be unregistered at runtime, falling back to the next
  custom allocator or to the default allocator.
- custom allocators can share the same set of alloc()/free() helpers, in
  this case they also share the same IOASID space, thus the same XArray.
- switching between allocators requires all outstanding IOASIDs to be
  freed unless the two allocators share the same alloc()/free() helpers.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Link: https://lkml.org/lkml/2019/4/26/462
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 13:34:25 +02:00
Jean-Philippe Brucker
fa83433c92 iommu: Add I/O ASID allocator
Some devices might support multiple DMA address spaces, in particular
those that have the PCI PASID feature. PASID (Process Address Space ID)
allows sharing process address spaces with devices (SVA), partitioning a
device into VM-assignable entities (VFIO mdev), or simply providing
multiple DMA address spaces to kernel drivers. Add a global PASID
allocator usable by different drivers at the same time. Name it I/O ASID
to avoid confusion with ASIDs allocated by arch code, which are usually
a separate ID space.

The IOASID space is global. Each device can have its own PASID space,
but by convention the IOMMU ended up having a global PASID space, so
that with SVA, each mm_struct is associated with a single PASID.

The allocator is primarily used by the IOMMU subsystem, but on rare
occasions drivers would like to allocate PASIDs for devices that aren't
managed by an IOMMU, using the same ID space as the IOMMU.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 13:34:16 +02:00
Yi L Liu
4c7c171f85 iommu: Introduce cache_invalidate API
In any virtualization use case, when the first translation stage
is "owned" by the guest OS, the host IOMMU driver has no knowledge
of caching structure updates unless the guest invalidation activities
are trapped by the virtualizer and passed down to the host.

Since the invalidation data can be obtained from user space and will be
written into the physical IOMMU, we must allow security checks at various
layers. Therefore, a generic invalidation data format is proposed here;
model-specific IOMMU drivers need to convert it into their own format.

Signed-off-by: Yi L Liu <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 13:34:04 +02:00
Geert Uytterhoeven
ec37d4e999 iommu/ipmmu-vmsa: Only call platform_get_irq() when interrupt is mandatory
As platform_get_irq() now prints an error when the interrupt does not
exist, calling it gratuitously causes scary messages like:

    ipmmu-vmsa e6740000.mmu: IRQ index 0 not found

Fix this by moving the call to platform_get_irq() down, where the
existence of the interrupt is mandatory.

Fixes: 7723f4c5ec ("driver core: platform: Add an error message to platform_get_irq*()")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Tested-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 13:00:43 +02:00
Biju Das
757f26a3a9 iommu/ipmmu-vmsa: Hook up r8a774b1 DT matching code
Support RZ/G2N (R8A774B1) IPMMU.

Signed-off-by: Biju Das <biju.das@bp.renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 12:58:13 +02:00
Heiko Stuebner
f9258156c7 iommu/rockchip: Don't use platform_get_irq to implicitly count irqs
Until now the Rockchip iommu driver walked through the irq list via
platform_get_irq() until it encountered an -ENXIO error. With the
recent change to add a central error message, this always results
in such an error message for each iommu on probe and shutdown.

To not confuse people, switch to platform_irq_count() to get the
actual number of interrupts before walking through them.

Fixes: 7723f4c5ec ("driver core: platform: Add an error message to platform_get_irq*()")
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Enric Balletbo i Serra <enric.balletbo@collabora.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 12:45:16 +02:00
Andy Shevchenko
ae5e6c6439 iommu/amd: Switch to use acpi_dev_hid_uid_match()
Since we have a generic helper, drop custom implementation in the driver.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2019-10-15 12:06:23 +02:00
Lu Baolu
1ee0186b9a iommu/vt-d: Refactor find_domain() helper
The current find_domain() helper checks for and does the deferred domain
attachment and returns the domain in use. This isn't always what the
callers want; some callers only want to retrieve the current domain in
use.

This refactors find_domain() into two helpers: 1) find_domain()
only returns the domain in use; 2) deferred_attach_domain() does
the deferred domain attachment if required and returns the domain
in use.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 11:52:25 +02:00
Christophe JAILLET
da6b05dce2 iommu/qcom: Simplify a test in 'qcom_iommu_add_device()'
'iommu_group_get_for_dev()' never returns NULL, so this test can be
simplified a bit.

This way, the test is consistent with all other calls to
'iommu_group_get_for_dev()'.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 11:50:27 +02:00
Tom Murphy
be62dbf554 iommu/amd: Convert AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 11:31:04 +02:00
Tom Murphy
6e2350207f iommu/dma-iommu: Use the dev->coherent_dma_mask
Use the dev->coherent_dma_mask when allocating in the dma-iommu ops api.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 11:31:04 +02:00
Tom Murphy
795bbbb9b6 iommu/dma-iommu: Handle deferred devices
Handle devices which defer their attach to the iommu in the dma-iommu api

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 11:31:04 +02:00
Tom Murphy
781ca2de89 iommu: Add gfp parameter to iommu_ops::map
Add a gfp_t parameter to the iommu_ops::map function.
Remove the needless locking in the AMD iommu driver.

The iommu_ops::map function (or the iommu_map function which calls it)
was always supposed to be sleepable (according to Joerg's comment in
this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so
should probably have had a "might_sleep()" since it was written. However,
currently the dma-iommu api can call iommu_map in an atomic context,
which it shouldn't do. This doesn't cause any problems because any iommu
driver which uses the dma-iommu api uses GFP_ATOMIC in its
iommu_ops::map function. But doing this wastes the memory allocator's
atomic pools.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 11:31:04 +02:00
Tom Murphy
37ec8eb851 iommu/amd: Remove unnecessary locking from AMD iommu driver
With or without locking, it doesn't make sense for two writers to be
writing to the same IOVA range at the same time. Even with locking we
still have a race over whoever gets the lock first, so we still
can't be sure what the result will be. With locking the result will be
more sane (it will be correct for the last writer), but still useless,
because we can't be sure which writer will get the lock last. It's a
fundamentally broken design to have two writers writing to the same
IOVA range at the same time.

So we can remove the locking and work on the assumption that no two
writers will be writing to the same IOVA range at the same time.

The only exception is when we have to allocate a middle page in the page
tables; the middle page can cover more than just the IOVA range a writer
has been allocated. However, this isn't an issue in the AMD driver
because it can atomically allocate middle pages using "cmpxchg64()".
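
For illustration, a standalone model of that lock-free middle-page
install, with C11 atomics standing in for cmpxchg64() (simplified, not
the driver's code):

#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>

/* Whoever wins the compare-and-swap provides the next-level table;
 * a loser frees its copy and uses the winner's. */
static void *install_middle_page(_Atomic uintptr_t *slot)
{
        uintptr_t expected = 0;
        void *page = calloc(1, 4096);

        if (!page)
                return NULL;

        if (!atomic_compare_exchange_strong(slot, &expected, (uintptr_t)page)) {
                free(page);
                return (void *)expected;
        }
        return page;
}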

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 11:31:03 +02:00
Christophe JAILLET
bdde4718ab iommu/arm-smmu: Axe a useless test in 'arm_smmu_master_alloc_smes()'
'iommu_group_get_for_dev()' never returns NULL, so this test can be removed.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-01 12:36:11 +01:00
Christophe JAILLET
9062c1d0be iommu/io-pgtable: Move some initialization data to .init.rodata
The memory used by '__init' functions can be freed once the initialization
phase has been performed.

Mark some 'static const' array defined and used within some '__init'
functions as '__initconst', so that the corresponding data can also be
discarded.

Without '__initconst', the data are put in the .rodata section.
With the qualifier, they are put in the .init.rodata section.

With gcc 8.3.0, the following changes have been measured:

Without '__initconst':
   section      size
  .rodata       00000720
  .init.rodata  00000018

With '__initconst':
   section      size
  .rodata       00000660
  .init.rodata  00000058

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-01 12:36:10 +01:00
Robin Murphy
931a0ba638 iommu/arm-smmu: Report USF more clearly
Although CONFIG_ARM_SMMU_DISABLE_BYPASS_BY_DEFAULT is a welcome tool
for smoking out inadequate firmware, the failure mode is non-obvious
and can be confusing for end users. Add some special-case reporting of
Unidentified Stream Faults to help clarify this particular symptom.
Since we're adding yet another print to the mix, also break out an
explicit ratelimit state to make sure everything stays together (and
reduce the static storage footprint a little).

Reviewed-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-01 12:17:40 +01:00
Robin Murphy
696bcfb709 iommu/arm-smmu: Remove arm_smmu_flush_ops
Now it's just an empty wrapper.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-01 12:17:39 +01:00
Robin Murphy
ae2b60f34a iommu/arm-smmu: Move .tlb_sync method to implementation
With the .tlb_sync interface no longer exposed directly to io-pgtable,
strip away the remains of that abstraction layer. Retain the callback
in spirit, though, by transforming it into an implementation override
for the low-level sync routine itself, for which we will have at least
one user.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-01 12:17:39 +01:00
Robin Murphy
3370cb6bf6 iommu/arm-smmu: Remove "leaf" indirection
Now that the "leaf" flag is no longer part of an external interface,
there's no need to use it to infer a register offset at runtime when
we can just as easily encode the offset directly in its place.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-01 12:17:39 +01:00
Robin Murphy
3f3b8d0c9c iommu/arm-smmu: Remove .tlb_inv_range indirection
Fill in 'native' iommu_flush_ops callbacks for all the
arm_smmu_flush_ops variants, and clear up the remains of the previous
.tlb_inv_range abstraction.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-01 12:17:39 +01:00
Robin Murphy
1be08f458d iommu/io-pgtable-arm: Support all Mali configurations
In principle, Midgard GPUs supporting smaller VA sizes should only
require 3-level pagetables, since level 0 only resolves bits 48:40 of
the address. However, the kbase driver does not appear to have any
notion of a variable start level, and empirically T720 and T820 rapidly
blow up with translation faults unless given a full 4-level table,
despite only supporting a 33-bit VA size.

The 'real' IAS value is still valuable in terms of validating addresses
on map/unmap, so tweak the allocator to allow smaller values while still
forcing the resultant tables to the full 4 levels. As far as I can test,
this should make all known Midgard variants happy.

Fixes: d08d42de64 ("iommu: io-pgtable: Add ARM Mali midgard MMU page table format")
Tested-by: Neil Armstrong <narmstrong@baylibre.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-01 12:16:47 +01:00
Robin Murphy
52f325f4eb iommu/io-pgtable-arm: Correct Mali attributes
Whilst Midgard's MEMATTR follows a similar principle to the VMSA MAIR,
the actual attribute values differ, so although it currently appears to
work to some degree, we probably shouldn't be using our standard stage 1
MAIR for that. Instead, generate a reasonable MEMATTR with attribute
values borrowed from the kbase driver; at this point we'll be overriding
or ignoring pretty much all of the LPAE config, so just implement these
Mali details in a dedicated allocator instead of pretending to subclass
the standard VMSA format.

Fixes: d08d42de64 ("iommu: io-pgtable: Add ARM Mali midgard MMU page table format")
Tested-by: Neil Armstrong <narmstrong@baylibre.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-01 12:16:46 +01:00
Liu Xiang
6db7bfb431 iommu/arm-smmu: Free context bitmap in the err path of arm_smmu_init_domain_context
When alloc_io_pgtable_ops() fails, the context bitmap which was just
allocated by __arm_smmu_alloc_bitmap() should be freed to release the
resource.

Signed-off-by: Liu Xiang <liuxiang_1999@126.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-01 12:13:16 +01:00
Linus Torvalds
4d2af08ed0 IOMMU Fixes for Linux v5.4-rc1
A couple of fixes for the AMD IOMMU driver have piled up:
 
 	* Some fixes for the reworked IO page-table which caused memory
 	  leaks or did not allow to downgrade mappings under some
 	  conditions.
 
 	* Locking fixes to fix a couple of possible races around
 	  accessing 'struct protection_domain'. The races got introduced
 	  when the dma-ops path became lock-less in the fast-path.
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEr9jSbILcajRFYWYyK/BELZcBGuMFAl2PrpoACgkQK/BELZcB
 GuNo6A/9EpxNUllqaPLvGYJYPN1ye2kx9QOCYZW6vo+at10X9ywf69IqYtjP9cSe
 x5uWUy0BFjBhqHvMvQ+9m6begFsue/+csUZDmeP+KvBHwNxUOxFS/fb4P0WlmmNF
 /zzsjQbt+r1FRIdYodH2CvBJKyuxNxou0W1aARvs9iggoXVG5Es+WG9+kwnixBE+
 WB1gpuX0zKWlu31z2+i+JrVtdjMqoupfR/T40C4OsMD3NjfNi0bkCqmnqJ3CpNh9
 RWPmNlnd29imPhMYQonZcUFD6Ru4NOUCfEFCjHEK/nk9kSHMYjgkKFgOzvA8h1xG
 Nkzd0dRw39UMNYzKDGHHaE/xXRJV+kOFxZBcABnxfx2r+9EgXBD36AUOsfpeOdVi
 9ab75ok7Ly+tkCgdK7sEeuDD0HJiZkUYT7BqMTdBOt64BK/GtRvepF1Zv15hG6Xn
 imlAfyE4q+avTAJkrXeIu6IgdvF4XvorsIdeF5dKjCBTdTkj8DLXq/gejAo0g1NO
 shOz9E2lde1IdeT+U580nZy9JmkKDFjyeG4QkwSz7Oln/gHIFQS1K8A4i30kGiok
 vMsJzBidtUuqRWupwymtobCAggZE86O2XLOwnxolarJAFOqg5V2j7fSyL+XxXUDC
 r85Ve/jtAhMho5594X72CumoNzzr0bDyCcGerzvT0wBRXcKLIsw=
 =xajX
 -----END PGP SIGNATURE-----

Merge tag 'iommu-fixes-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull iommu fixes from Joerg Roedel:
 "A couple of fixes for the AMD IOMMU driver have piled up:

   - Some fixes for the reworked IO page-table which caused memory leaks
     or did not allow to downgrade mappings under some conditions.

   - Locking fixes to fix a couple of possible races around accessing
     'struct protection_domain'. The races got introduced when the
     dma-ops path became lock-less in the fast-path"

* tag 'iommu-fixes-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
  iommu/amd: Lock code paths traversing protection_domain->dev_list
  iommu/amd: Lock dev_data in attach/detach code paths
  iommu/amd: Check for busy devices earlier in attach_device()
  iommu/amd: Take domain->lock for complete attach/detach path
  iommu/amd: Remove amd_iommu_devtable_lock
  iommu/amd: Remove domain->updated
  iommu/amd: Wait for completion of IOTLB flush in attach_device
  iommu/amd: Unmap all L7 PTEs when downgrading page-sizes
  iommu/amd: Introduce first_pte_l7() helper
  iommu/amd: Fix downgrading default page-sizes in alloc_pte()
  iommu/amd: Fix pages leak in free_pagetable()
2019-09-29 10:00:14 -07:00
Joerg Roedel
2a78f99625 iommu/amd: Lock code paths traversing protection_domain->dev_list
The traversing of this list requires protection_domain->lock to be taken
to avoid nasty races with attach/detach code. Make sure the lock is held
on all code-paths traversing this list.

Reported-by: Filippo Sironi <sironi@amazon.de>
Fixes: 92d420ec02 ("iommu/amd: Relax locking in dma_ops path")
Reviewed-by: Filippo Sironi <sironi@amazon.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-28 14:44:13 +02:00
Joerg Roedel
ab7b2577f0 iommu/amd: Lock dev_data in attach/detach code paths
Make sure that attaching and detaching a device can't race against each
other, and protect the iommu_dev_data with a spin_lock in these code
paths.

Fixes: 92d420ec02 ("iommu/amd: Relax locking in dma_ops path")
Reviewed-by: Filippo Sironi <sironi@amazon.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-28 14:44:04 +02:00
Joerg Roedel
45e528d9c4 iommu/amd: Check for busy devices earlier in attach_device()
Check early in attach_device whether the device is already attached to a
domain. This also simplifies the code path so that __attach_device() can
be removed.

Fixes: 92d420ec02 ("iommu/amd: Relax locking in dma_ops path")
Reviewed-by: Filippo Sironi <sironi@amazon.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-28 14:43:58 +02:00
Joerg Roedel
f6c0bfce27 iommu/amd: Take domain->lock for complete attach/detach path
The code-paths from which __attach_device() and __detach_device() are
called also access and modify domain state, so take the domain lock
there too. This allows us to get rid of the __detach_device() function.

Fixes: 92d420ec02 ("iommu/amd: Relax locking in dma_ops path")
Reviewed-by: Filippo Sironi <sironi@amazon.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-28 14:43:52 +02:00
Joerg Roedel
3a11905b69 iommu/amd: Remove amd_iommu_devtable_lock
The lock is not necessary because the device table does not
contain shared state that needs protection. Locking is only
needed on an individual entry basis, and that needs to
happen on the iommu_dev_data level.

Fixes: 92d420ec02 ("iommu/amd: Relax locking in dma_ops path")
Reviewed-by: Filippo Sironi <sironi@amazon.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-28 14:43:46 +02:00
Joerg Roedel
f15d9a992f iommu/amd: Remove domain->updated
This struct member was used to track whether a domain
change requires updates to the device-table and IOMMU cache
flushes. The problem is that access to this field is racy
since locking in the common mapping code-paths has been
eliminated.

Move the updated field to the stack to get rid of all
potential races and remove the field from the struct.

Fixes: 92d420ec02 ("iommu/amd: Relax locking in dma_ops path")
Reviewed-by: Filippo Sironi <sironi@amazon.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-28 14:43:36 +02:00
Filippo Sironi
0b15e02f0c iommu/amd: Wait for completion of IOTLB flush in attach_device
To make sure the domain tlb flush completes before the
function returns, explicitly wait for its completion.

Signed-off-by: Filippo Sironi <sironi@amazon.de>
Fixes: 42a49f965a ("amd-iommu: flush domain tlb when attaching a new device")
[joro: Added commit message and fixes tag]
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-24 11:40:04 +02:00
Andrei Dulea
cc449541f2 iommu/amd: Unmap all L7 PTEs when downgrading page-sizes
When replacing a large mapping created with page-mode 7 (i.e.
non-default page size), tear down the entire series of replicated PTEs.
Besides leaving access to the old mapping in place, this issue can also
cause the fetch_pte() code path to return a PDE entry of the newly
re-mapped range.

While at it, make sure that we flush the TLB in case alloc_pte() fails
and returns NULL at a lower level.

Fixes: 6d568ef9a6 ("iommu/amd: Allow downgrading page-sizes in alloc_pte()")
Signed-off-by: Andrei Dulea <adulea@amazon.de>
2019-09-24 11:15:51 +02:00
Andrei Dulea
7f1f1683c1 iommu/amd: Introduce first_pte_l7() helper
Given an arbitrary pte that is part of a large mapping, this function
returns the first pte of the series (and optionally the mapped size and
number of PTEs).
It will be re-used in a subsequent patch to replace an existing L7
mapping.

Fixes: 6d568ef9a6 ("iommu/amd: Allow downgrading page-sizes in alloc_pte()")
Signed-off-by: Andrei Dulea <adulea@amazon.de>
2019-09-24 11:15:46 +02:00
Andrei Dulea
6ccb72f837 iommu/amd: Fix downgrading default page-sizes in alloc_pte()
Downgrading an existing large mapping to a mapping using smaller
page-sizes works only for the mappings created with page-mode 7 (i.e.
non-default page size).

Treat large mappings created with page-mode 0 (i.e. default page size)
like a non-present mapping and allow them to be overwritten in
alloc_pte().

While at it, make sure that we flush the TLB only if we change an
existing mapping, otherwise we might end up acting on garbage PTEs.

Fixes: 6d568ef9a6 ("iommu/amd: Allow downgrading page-sizes in alloc_pte()")
Signed-off-by: Andrei Dulea <adulea@amazon.de>
2019-09-24 11:15:37 +02:00
Andrei Dulea
34c0989c05 iommu/amd: Fix pages leak in free_pagetable()
Take into account the gathered freelist in free_sub_pt(), otherwise we
end up leaking all those pages.

Fixes: 409afa44f9 ("iommu/amd: Introduce free_sub_pt() function")
Signed-off-by: Andrei Dulea <adulea@amazon.de>
2019-09-24 11:15:09 +02:00
Linus Torvalds
e3a008ac12 Devicetree updates for v5.4:
- A bunch of DT binding conversions to DT schema format
 
 - Clean-ups of the Arm idle-states binding
 
 - Support a default number of cells in of_for_each_phandle() when the
   cells name is missing
 
 - Expose dtbs_check and dt_binding_check in the make help
 
 - Convert writting-schema.md to ReST
 
 - HiSilicon reset controller binding updates
 
 - Add documentation for MT8516 RNG
 -----BEGIN PGP SIGNATURE-----
 
 iQJEBAABCgAuFiEEktVUI4SxYhzZyEuo+vtdtY28YcMFAl2Dj38QHHJvYmhAa2Vy
 bmVsLm9yZwAKCRD6+121jbxhw4qcEACE16/eR0h9FSnhN0QpyFlGrfUTy86K5Z4N
 IoJsGind4G7+TrNA6GGZwQkNRt3roWdrkqnLLvcted+8IVaXOFm0n12w2u0yoYvk
 C4pqxH2HRUC9U9eBjyDxdiplH9yYZPuy8bFwLPSQk0bkCd6D3I8iDe6qHm1arin3
 sYIQ03jbZKowHixOuMNvu9rBiun79Lm5FfGUSi7EYab3KZ4Zt9HX1IiySRYVOWZT
 z6bjWbVfFe7HgbImwaB+WUYumUyNu5dh4AyqIidb9o6BB6ZENfnBNWPi0VDFuSGT
 4wVc8XrcU3d7bt6Sstt+g3WZjn+JBMLNBkNnMjZ+nlp3OoR5F6Tf1RO6mrZtsENa
 sAspr18zNQK7CNBy0uKzBT32Z0oN1wXnsKRS5P1o5/8aEjRr0m8stxes3hOQhtuJ
 Y6rKLN9kGrQIeSY7nagWuGFaJ1uunGXCSgam+kb6YI8nDa3DUbzeIhYMIcqgz/Sx
 Gx2txPzKMHXgzF7Zc+5db9X3E7pg8Y1zrhk7o2oKiFVWrnwlEJivMcRHq9n3anOr
 RGAJPjrRfzwZNIQgYNflYHAdxVLyKKhpxEQDdo/5PXeMRYtghOH+rIxwoS31FHSs
 u/4nf0uHFQfkmSg7nSKicfSWt5ORR5G/H9cc83SRoix35kfPubirkawJ/tkcVuO4
 3n0NeGERtA==
 =ZO6c
 -----END PGP SIGNATURE-----

Merge tag 'devicetree-for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux

Pull Devicetree updates from Rob Herring:

 - a bunch of DT binding conversions to DT schema format

 - clean-ups of the Arm idle-states binding

 - support a default number of cells in of_for_each_phandle() when the
   cells name is missing

 - expose dtbs_check and dt_binding_check in the make help

 - convert writting-schema.md to ReST

 - HiSilicon reset controller binding updates

 - add documentation for MT8516 RNG

* tag 'devicetree-for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux: (46 commits)
  of: restore old handling of cells_name=NULL in of_*_phandle_with_args()
  bus: qcom: fix spelling mistake "ambigous" -> "ambiguous"
  of: Let of_for_each_phandle fallback to non-negative cell_count
  iommu: pass cell_count = -1 to of_for_each_phandle with cells_name
  dt-bindings: arm: Convert Realtek board/soc bindings to json-schema
  dt-bindings: arm: Convert Actions Semi bindings to jsonschema
  dt-bindings: Correct spelling in example schema
  dt-bindings: cpu: Add a support cpu type for cortex-a55
  dt-bindings: gpu: mali-midgard: Add samsung exynos5250 compatible
  dt-bindings: arm: idle-states: Move exit-latency-us explanation
  dt-bindings: arm: idle-states: Add punctuation to improve readability
  dt-bindings: arm: idle-states: Correct "constraint guarantees"
  dt-bindings: arm: idle-states: Correct references to wake-up delay
  dt-bindings: arm: idle-states: Use "e.g." and "i.e." consistently
  pinctrl-mcp23s08: Fix property-name in dt-example
  dt-bindings: Clarify interrupts-extended usage
  dt-bindings: Convert Arm Mali Utgard GPU to DT schema
  dt-bindings: Convert Arm Mali Bifrost GPU to DT schema
  dt-bindings: Convert Arm Mali Midgard GPU to DT schema
  dt-bindings: irq: Convert Allwinner NMI Controller to a schema
  ...
2019-09-19 13:48:37 -07:00
Linus Torvalds
671df18953 dma-mapping updates for 5.4:
- add dma-mapping and block layer helpers to take care of IOMMU
    merging for mmc plus subsequent fixups (Yoshihiro Shimoda)
  - rework handling of the pgprot bits for remapping (me)
  - take care of the dma direct infrastructure for swiotlb-xen (me)
  - improve the dma noncoherent remapping infrastructure (me)
  - better defaults for ->mmap, ->get_sgtable and ->get_required_mask (me)
  - cleanup mmaping of coherent DMA allocations (me)
  - various misc cleanups (Andy Shevchenko, me)
 -----BEGIN PGP SIGNATURE-----
 
 iQI/BAABCgApFiEEgdbnc3r/njty3Iq9D55TZVIEUYMFAl2CSucLHGhjaEBsc3Qu
 ZGUACgkQD55TZVIEUYPfrhAAgXZA/EdFPvkkCoDrmgtf3XkudX9gajeCd9g4NZy6
 ZBQElTVvm4S0sQj7IXgALnMumDMbbTibW5SQLX5GwQDe+XXBpZ8ajpAnJAXc8a5T
 qaFQ4SInr4CgBZf9nZKDkbSBZ1Tu3AQm1c0QI8riRCkrVTuX4L06xpCef4Yh4mgO
 rwWEjIioYpQiKZMmu98riXh3ZNfFG3mVJRhKt8B6XJbBgnUnjDOPYGgaUwp6CU20
 tFBKL2GaaV0vdLJ5wYhIGXT4DJ8tp9T5n3IYGZv1Ux889RaZEHlCrMxzelYeDbCT
 KhZbhcSECGnddsh73t/UX7/KhytuqnfKa9n+Xo6AWuA47xO4c36quOOcTk9M0vE5
 TfGDmewgL6WIv4lzokpRn5EkfDhyL33j8eYJrJ8e0ldcOhSQIFk4ciXnf2stWi6O
 JrlzzzSid+zXxu48iTfoPdnMr7psTpiMvvRvKfEeMp2FX9Fg6EdMzJYLTEl+COHB
 0WwNacZmY3P01+b5EZXEgqKEZevIIdmPKbyM9rPtTjz8BjBwkABHTpN3fWbVBf7/
 Ax6OPYyW40xp1fnJuzn89m3pdOxn88FpDdOaeLz892Zd+Qpnro1ayulnFspVtqGM
 mGbzA9whILvXNRpWBSQrvr2IjqMRjbBxX3BVACl3MMpOChgkpp5iANNfSDjCftSF
 Zu8=
 =/wGv
 -----END PGP SIGNATURE-----

Merge tag 'dma-mapping-5.4' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping updates from Christoph Hellwig:

 - add dma-mapping and block layer helpers to take care of IOMMU merging
   for mmc plus subsequent fixups (Yoshihiro Shimoda)

 - rework handling of the pgprot bits for remapping (me)

 - take care of the dma direct infrastructure for swiotlb-xen (me)

 - improve the dma noncoherent remapping infrastructure (me)

 - better defaults for ->mmap, ->get_sgtable and ->get_required_mask
   (me)

 - cleanup mmaping of coherent DMA allocations (me)

 - various misc cleanups (Andy Shevchenko, me)

* tag 'dma-mapping-5.4' of git://git.infradead.org/users/hch/dma-mapping: (41 commits)
  mmc: renesas_sdhi_internal_dmac: Add MMC_CAP2_MERGE_CAPABLE
  mmc: queue: Fix bigger segments usage
  arm64: use asm-generic/dma-mapping.h
  swiotlb-xen: merge xen_unmap_single into xen_swiotlb_unmap_page
  swiotlb-xen: simplify cache maintainance
  swiotlb-xen: use the same foreign page check everywhere
  swiotlb-xen: remove xen_swiotlb_dma_mmap and xen_swiotlb_dma_get_sgtable
  xen: remove the exports for xen_{create,destroy}_contiguous_region
  xen/arm: remove xen_dma_ops
  xen/arm: simplify dma_cache_maint
  xen/arm: use dev_is_dma_coherent
  xen/arm: consolidate page-coherent.h
  xen/arm: use dma-noncoherent.h calls for xen-swiotlb cache maintainance
  arm: remove wrappers for the generic dma remap helpers
  dma-mapping: introduce a dma_common_find_pages helper
  dma-mapping: always use VM_DMA_COHERENT for generic DMA remap
  vmalloc: lift the arm flag for coherent mappings to common code
  dma-mapping: provide a better default ->get_required_mask
  dma-mapping: remove the dma_declare_coherent_memory export
  remoteproc: don't allow modular build
  ...
2019-09-19 13:27:23 -07:00
Linus Torvalds
4feaab05dc LED updates for 5.4-rc1
-----BEGIN PGP SIGNATURE-----
 
 iHUEABYIAB0WIQQUwxxKyE5l/npt8ARiEGxRG/Sl2wUCXYAIeQAKCRBiEGxRG/Sl
 2/SzAQDEnoNxzV/R5kWFd+2kmFeY3cll0d99KMrWJ8om+kje6QD/cXxZHzFm+T1L
 UPF66k76oOODV7cyndjXnTnRXbeCRAM=
 =Szby
 -----END PGP SIGNATURE-----

Merge tag 'leds-for-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/j.anaszewski/linux-leds

Pull LED updates from Jacek Anaszewski:
 "In this cycle we've finally managed to contribute the patch set
  sorting out LED naming issues. Besides that there are many changes
  scattered among various LED class drivers and triggers.

  LED naming related improvements:

   - add new 'function' and 'color' fwnode properties and deprecate
     'label' property which has been frequently abused for conveying
     vendor specific names that have been available in sysfs anyway

   - introduce a set of standard LED_FUNCTION* definitions

   - introduce a set of standard LED_COLOR_ID* definitions

   - add a new {devm_}led_classdev_register_ext() API with the
     capability of automatic LED name composition basing on the
     properties available in the passed fwnode; the function is
     backwards compatible in a sense that it uses 'label' data, if
     present in the fwnode, for creating LED name

   - add tools/leds/get_led_device_info.sh script for retrieving LED
     vendor, product and bus names, if applicable; it also performs
     basic validation of an LED name

   - update following drivers and their DT bindings to use the new LED
     registration API:

        - leds-an30259a, leds-gpio, leds-as3645a, leds-aat1290, leds-cr0014114,
          leds-lm3601x, leds-lm3692x, leds-lp8860, leds-lt3593, leds-sc27xx-blt

  Other LED class improvements:

   - replace {devm_}led_classdev_register() macros with inlines

   - allow to call led_classdev_unregister() unconditionally

   - switch to use fwnode instead of be stuck with OF one

  LED triggers improvements:

   - led-triggers:
        - fix dereferencing of null pointer
        - fix a memory leak bug

   - ledtrig-gpio:
        - GPIO 0 is valid

  Drop superseeded apu2/3 support from leds-apu since for apu2+ a newer,
  more complete driver exists, based on a generic driver for the AMD
  SOCs gpio-controller, supporting LEDs as well other devices:

   - drop profile field from priv data

   - drop iosize field from priv data

   - drop enum_apu_led_platform_types

   - drop superseeded apu2/3 led support

   - add pr_fmt prefix for better log output

   - fix error message on probing failure

  Other misc fixes and improvements to existing LED class drivers:

   - leds-ns2, leds-max77650:
        - add of_node_put() before return

   - leds-pwm, leds-is31fl32xx:
        - use struct_size() helper

   - leds-lm3697, leds-lm36274, leds-lm3532:
        - switch to use fwnode_property_count_uXX()

   - leds-lm3532:
        - fix brightness control for i2c mode
        - change the define for the fs current register
        - fixes for the driver for stability
        - add full scale current configuration
        - dt: Add property for full scale current.
        - avoid potentially unpaired regulator calls
        - move static keyword to the front of declarations
        - fix optional led-max-microamp prop error handling

   - leds-max77650:
        - add of_node_put() before return
        - add MODULE_ALIAS()
        - Switch to fwnode property API

   - leds-as3645a:
        - fix misuse of strlcpy

   - leds-netxbig:
        - add of_node_put() in netxbig_leds_get_of_pdata()
        - remove legacy board-file support

   - leds-is31fl319x:
        - simplify getting the adapter of a client

   - leds-ti-lmu-common:
        - fix coccinelle issue
        - move static keyword to the front of declaration

   - leds-syscon:
        - use resource managed variant of device register

   - leds-ktd2692:
        - fix a typo in the name of a constant

   - leds-lp5562:
        - allow firmware files up to the maximum length

   - leds-an30259a:
        - fix typo

   - leds-pca953x:
        - include the right header"

* tag 'leds-for-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/j.anaszewski/linux-leds: (72 commits)
  leds: lm3532: Fix optional led-max-microamp prop error handling
  led: triggers: Fix dereferencing of null pointer
  leds: ti-lmu-common: Move static keyword to the front of declaration
  leds: lm3532: Move static keyword to the front of declarations
  leds: trigger: gpio: GPIO 0 is valid
  leds: pwm: Use struct_size() helper
  leds: is31fl32xx: Use struct_size() helper
  leds: ti-lmu-common: Fix coccinelle issue in TI LMU
  leds: lm3532: Avoid potentially unpaired regulator calls
  leds: syscon: Use resource managed variant of device register
  leds: Replace {devm_}led_classdev_register() macros with inlines
  leds: Allow to call led_classdev_unregister() unconditionally
  leds: lm3532: Add full scale current configuration
  dt: lm3532: Add property for full scale current.
  leds: lm3532: Fixes for the driver for stability
  leds: lm3532: Change the define for the fs current register
  leds: lm3532: Fix brightness control for i2c mode
  leds: Switch to use fwnode instead of be stuck with OF one
  leds: max77650: Switch to fwnode property API
  led: triggers: Fix a memory leak bug
  ...
2019-09-17 18:40:42 -07:00
Linus Torvalds
76f0f227cf ia64 for v5.4 - big change here is removal of support for SGI Altix
-----BEGIN PGP SIGNATURE-----
 
 iQIcBAABAgAGBQJdf64MAAoJEKurIx+X31iBB20P/07o93sBT92SiA2/ety9sLqV
 BGJmEdw7gyb9WVbUip6s71FIEKZw4foCGkqDiX+lr5Fw2A9tiK7LmFgTLi4LLwg+
 syhYZ1y5/mwBI4FLlJudKjQdFZjr/n7DNlz4H67woE2kK+FyRsOKEaFUhuR8+0rC
 mKJBKtIGnoIOPG06PT1k5qfdpzlreCFoWdIhjO55LfDgZnnDiMaX5h0vcBQ9xgCp
 xGV0n/f7+qn4pzB4hGvNV209Sdgv2V4t77bHNvyXlJrM5Hqzafo5MzFgEJv+fRqJ
 2RnkWVhwctfbid/2ggf2aAsYnMK3GigEaOCsYW2oWJESVUQhxIi3ndF/Jt9fraZv
 ZouD7G/s64P5lUQuCT9JnKGzJrSgxvkd37049AZ4pFVc2MzLC6o6dyyP8pu5ARe8
 T0shFik3+gsml2US/vSUzxvrg1saRQjl9E/AJ0RTZ8oyP4FNnFmkJf38qj3a0L0k
 ILFYscM5q7WPggoDA/m6F96tLGhdK/sKjDzrADjEh2dIvn4woqoEJSDn+rXuP+Gm
 UOj1v8mILZCqvOAmc9IkGCkPUlbrmNV/1FYh5+GWudtillEaD82vjSqm+jnVbfXD
 REvHlR/kxCSj1gg/+nk+NFdZCkW3xETOcTZohhDkR7du2mHjTwBMZ2YRPrqoX4c8
 VZA57Mrqm5Uk5601qYRl
 =L5e+
 -----END PGP SIGNATURE-----

Merge tag 'please-pull-ia64_for_5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux

Pull ia64 updates from Tony Luck:
 "The big change here is removal of support for SGI Altix"

* tag 'please-pull-ia64_for_5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux: (33 commits)
  genirq: remove the is_affinity_mask_valid hook
  ia64: remove CONFIG_SWIOTLB ifdefs
  ia64: remove support for machvecs
  ia64: move the screen_info setup to common code
  ia64: move the ROOT_DEV setup to common code
  ia64: rework iommu probing
  ia64: remove the unused sn_coherency_id symbol
  ia64: remove the SGI UV simulator support
  ia64: remove the zx1 swiotlb machvec
  ia64: remove CONFIG_ACPI ifdefs
  ia64: remove CONFIG_PCI ifdefs
  ia64: remove the hpsim platform
  ia64: remove now unused machvec indirections
  ia64: remove support for the SGI SN2 platform
  drivers: remove the SGI SN2 IOC4 base support
  drivers: remove the SGI SN2 IOC3 base support
  qla2xxx: remove SGI SN2 support
  qla1280: remove SGI SN2 support
  misc/sgi-xp: remove SGI SN2 support
  char/mspec: remove SGI SN2 support
  ...
2019-09-16 15:32:01 -07:00
Uwe Kleine-König
c680e9abaa iommu: pass cell_count = -1 to of_for_each_phandle with cells_name
Currently of_for_each_phandle ignores the cell_count parameter when a
cells_name is given. I intend to change that and let the iterator fall
back to a non-negative cell_count if the cells_name property is missing
in the referenced node.

To avoid changing how existing of_for_each_phandle users iterate, fix
them to pass cell_count = -1 when a cells_name is also given, which
yields the expected behaviour both with and without my change.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Rob Herring <robh@kernel.org>
2019-09-13 16:54:23 -05:00
Joerg Roedel
e95adb9add Merge branches 'arm/omap', 'arm/exynos', 'arm/smmu', 'arm/mediatek', 'arm/qcom', 'arm/renesas', 'x86/amd', 'x86/vt-d' and 'core' into next 2019-09-11 12:39:19 +02:00
Chris Wilson
1f76249cc3 iommu/vt-d: Declare Broadwell igfx dmar support snafu
Despite the widespread and complete failure of Broadwell integrated
graphics when DMAR is enabled, known over the years, we have never been
able to root cause the issue. Instead, we let the failure undermine our
confidence in the iommu system itself when we should be pushing for it to
be always enabled. Quirk away Broadwell and remove the rotten apple.

References: https://bugs.freedesktop.org/show_bug.cgi?id=89360
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Martin Peres <martin.peres@linux.intel.com>
Cc: Joerg Roedel <joro@8bytes.org>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-11 12:37:55 +02:00
Kyung Min Park
fd730007a0 iommu/vt-d: Add Scalable Mode fault information
Intel VT-d specification revision 3 added support for Scalable Mode
Translation for DMA remapping. Add the Scalable Mode fault reasons so
that detailed reasons are shown when a translation fault happens.

Link: https://software.intel.com/sites/default/files/managed/c5/15/vt-directed-io-spec.pdf

Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
Signed-off-by: Kyung Min Park <kyung.min.park@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-11 12:36:53 +02:00
Lu Baolu
cfb94a372f iommu/vt-d: Use bounce buffer for untrusted devices
The Intel VT-d hardware uses paging for DMA remapping.
The minimum mapped window is a page size. The device
drivers may map buffers not filling the whole IOMMU
window. This allows the device to access possibly
unrelated memory, and a malicious device could exploit
this to perform DMA attacks. To address this, the
Intel IOMMU driver will use bounce pages for those
buffers which don't fill whole IOMMU pages.
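
As a rough user-space sketch of the bounce-page idea (plain C with
illustrative names, not the driver code): a buffer that does not fill a
whole IOMMU page is copied into its own zeroed, page-sized bounce
buffer, so the device is only ever granted access to a page that
contains nothing else.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define IOMMU_PAGE_SIZE 4096

/* Copy a sub-page buffer into a private, zeroed bounce page; the device
 * would then be mapped to this page instead of the original buffer. */
static void *bounce_map(const void *buf, size_t len)
{
        void *bounce;

        if (len > IOMMU_PAGE_SIZE)              /* sketch handles one page only */
                return NULL;
        bounce = aligned_alloc(IOMMU_PAGE_SIZE, IOMMU_PAGE_SIZE);
        if (!bounce)
                return NULL;
        memset(bounce, 0, IOMMU_PAGE_SIZE);     /* no unrelated data in the page */
        memcpy(bounce, buf, len);               /* device DMA would target this page */
        return bounce;
}

int main(void)
{
        char small[100] = "payload";
        void *page = bounce_map(small, sizeof(small));

        printf("bounce page at %p\n", page);
        free(page);
        return 0;
}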

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Xu Pengfei <pengfei.xu@intel.com>
Tested-by: Mika Westerberg <mika.westerberg@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-11 12:34:31 +02:00
Lu Baolu
3b53034c26 iommu/vt-d: Add trace events for device dma map/unmap
This adds trace support for the Intel IOMMU driver. It
also declares some events which can be used to trace when an IOVA is
being mapped or unmapped in a domain.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-11 12:34:30 +02:00
Lu Baolu
c5a5dc4cbb iommu/vt-d: Don't switch off swiotlb if bounce page is used
The bounce page implementation depends on swiotlb. Hence, don't
switch off swiotlb if the system has untrusted devices or could
potentially be hot-added with any untrusted devices.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-11 12:34:30 +02:00
Lu Baolu
e5e04d0519 iommu/vt-d: Check whether device requires bounce buffer
This adds a helper to check whether a device needs to
use the bounce buffer. It also provides a boot-time option
to disable the bounce buffer. Users can use this to
prevent the iommu driver from using the bounce buffer,
for a performance gain.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Xu Pengfei <pengfei.xu@intel.com>
Tested-by: Mika Westerberg <mika.westerberg@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-11 12:34:29 +02:00
Arnd Bergmann
96088a203a iommu/omap: Mark pm functions __maybe_unused
The runtime_pm functions are unused when CONFIG_PM is disabled:

drivers/iommu/omap-iommu.c:1022:12: error: unused function 'omap_iommu_runtime_suspend' [-Werror,-Wunused-function]
static int omap_iommu_runtime_suspend(struct device *dev)
drivers/iommu/omap-iommu.c:1064:12: error: unused function 'omap_iommu_runtime_resume' [-Werror,-Wunused-function]
static int omap_iommu_runtime_resume(struct device *dev)

Mark them as __maybe_unused to let gcc silently drop them
instead of warning.
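
A minimal standalone sketch of the __maybe_unused pattern (the macro is
defined locally here for illustration; in the kernel it comes from the
compiler attribute headers): the function can stay in the file even when
its only caller is compiled out, without tripping -Wunused-function.

#include <stdio.h>

#define __maybe_unused __attribute__((unused))

/* Kept in the source even when CONFIG_PM is off; the attribute stops
 * -Wunused-function from firing for the unreferenced static function. */
static int __maybe_unused runtime_suspend(void)
{
        return 0;               /* would save hardware state here */
}

int main(void)
{
#ifdef CONFIG_PM                /* not defined in this sketch */
        printf("suspend: %d\n", runtime_suspend());
#endif
        puts("builds cleanly even though runtime_suspend() is unreferenced");
        return 0;
}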

Fixes: db8918f61d ("iommu/omap: streamline enable/disable through runtime pm callbacks")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-06 17:55:21 +02:00
Joerg Roedel
754265bcab iommu/amd: Fix race in increase_address_space()
After the conversion to lock-less dma-api calls, the
increase_address_space() function can be called without any
locking. Multiple CPUs could potentially race for increasing
the address space, leading to invalid domain->mode settings
and invalid page-tables. This has been happening in the wild
under high IO load and memory pressure.

Fix the race by locking this operation. The function is
called infrequently, so this does not introduce
a performance regression in the dma-api path again.
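
As a user-space sketch of the fix (pthreads instead of the kernel
spinlock, with made-up names): the rare grow operation is serialized and
the current level is re-checked under the lock, so two racing CPUs
cannot both bump it.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t domain_lock = PTHREAD_MUTEX_INITIALIZER;
static int domain_mode = 3;                     /* current page-table level */

static void increase_address_space(void)
{
        pthread_mutex_lock(&domain_lock);
        if (domain_mode < 6)                    /* re-check under the lock */
                domain_mode++;
        pthread_mutex_unlock(&domain_lock);
}

int main(void)
{
        increase_address_space();
        increase_address_space();
        printf("mode is now %d\n", domain_mode);
        return 0;
}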

Reported-by: Qian Cai <cai@lca.pw>
Fixes: 256e4621c2 ('iommu/amd: Make use of the generic IOVA allocator')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-06 10:55:51 +02:00
Stuart Hayes
36b7200f67 iommu/amd: Flush old domains in kdump kernel
When devices are attached to the amd_iommu in a kdump kernel, the old device
table entries (DTEs), which were copied from the crashed kernel, will be
overwritten with a new domain number.  When the new DTE is written, the IOMMU
is told to flush the DTE from its internal cache--but it is not told to flush
the translation cache entries for the old domain number.

Without this patch, AMD systems using the tg3 network driver fail when kdump
tries to save the vmcore to a network system, showing network timeouts and
(sometimes) IOMMU errors in the kernel log.

This patch will flush IOMMU translation cache entries for the old domain when
a DTE gets overwritten with a new domain number.

Signed-off-by: Stuart Hayes <stuart.w.hayes@gmail.com>
Fixes: 3ac3e5ee5e ('iommu/amd: Copy old trans table from old kernel')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-06 10:34:30 +02:00
Hai Nguyen Pham
3623002f0f iommu/ipmmu-vmsa: Disable cache snoop transactions on R-Car Gen3
According to the Hardware Manual Errata for Rev. 1.50 of April 10, 2019,
cache snoop transactions for page table walk requests are not supported
on R-Car Gen3.

Hence, this patch removes setting these fields in the IMTTBCR register,
since it will have no effect, and adds comments to the register bit
definitions, to make it clear they apply to R-Car Gen2 only.

Signed-off-by: Hai Nguyen Pham <hai.pham.ud@renesas.com>
[geert: Reword, add comments]
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-05 13:03:34 +02:00
Geert Uytterhoeven
5ca54fdc9b iommu/ipmmu-vmsa: Move IMTTBCR_SL0_TWOBIT_* to restore sort order
Move the recently added IMTTBCR_SL0_TWOBIT_* definitions up, to make
sure all IMTTBCR register bit definitions are sorted by decreasing bit
index.  Add comments to make it clear that they exist on R-Car Gen3
only.

Fixes: c295f504fb ("iommu/ipmmu-vmsa: Allow two bit SL0")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-05 13:03:04 +02:00
Christoph Hellwig
5cf4537975 dma-mapping: introduce a dma_common_find_pages helper
A helper to find the backing page array based on a virtual address.
This also ensures we do the same vm_flags check everywhere instead
of slightly different or missing ones in a few places.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-09-04 11:13:20 +02:00
Christoph Hellwig
512317401f dma-mapping: always use VM_DMA_COHERENT for generic DMA remap
Currently the generic dma remap allocator gets a vm_flags passed by
the caller that is a little confusing.  We just introduced a generic
vmalloc-level flag to identify the dma coherent allocations, so use
that everywhere and remove the now pointless argument.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-09-04 11:13:20 +02:00
Christoph Hellwig
f9f3232a7d dma-mapping: explicitly wire up ->mmap and ->get_sgtable
While the default ->mmap and ->get_sgtable implementations work for the
majority of our dma_map_ops implementations, they are inherently unsafe
for others that don't use the page allocator or CMA and/or use their
own way of remapping not covered by the common code.  So remove the
defaults if these methods are not wired up, but instead wire up the
default implementations for all safe instances.

Fixes: e1c7e32453 ("dma-mapping: always provide the dma_map_ops based implementation")
Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-09-04 11:13:18 +02:00
Joerg Roedel
2896ba40d0 iommu: Don't use sme_active() in generic code
Switch to the generic function mem_encrypt_active() because
sme_active() is x86 specific and can't be called from
generic code on platforms other than x86.

Fixes: 2cc13bb4f5 ("iommu: Disable passthrough mode when SME is active")
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-03 15:15:44 +02:00
Jacob Pan
8744daf4b0 iommu/vt-d: Remove global page flush support
Global pages support is removed from VT-d spec 3.0. Since the global
pages (G) flag only affects first-level paging structures, and DMA
requests with PASID are only supported by VT-d spec 3.0 and onward, we
can safely remove global pages support.

For kernel shared virtual address IOTLB invalidation, PASID
granularity and page selective within PASID will be used. There is
no global granularity supported. Without this fix, IOTLB invalidation
will cause invalid descriptor error in the queued invalidation (QI)
interface.

Fixes: 1c4f88b7f1 ("iommu/vt-d: Shared virtual address in scalable mode")
Reported-by: Sanjay K Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-03 15:01:27 +02:00
YueHaibing
097a7df2e3 iommu/arm-smmu-v3: Fix build error without CONFIG_PCI_ATS
If CONFIG_PCI_ATS is not set, building fails:

drivers/iommu/arm-smmu-v3.c: In function arm_smmu_ats_supported:
drivers/iommu/arm-smmu-v3.c:2325:35: error: struct pci_dev has no member named ats_cap; did you mean msi_cap?
  return !pdev->untrusted && pdev->ats_cap;
                                   ^~~~~~~

ats_cap should only be used when CONFIG_PCI_ATS is defined,
so use an #ifdef block to guard this.
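
A standalone sketch of that guard (the struct below is a stand-in, not
the real struct pci_dev): the config-dependent member is only referenced
when the option is enabled, with a constant fallback otherwise.

#include <stdbool.h>
#include <stdio.h>

struct pci_dev_like {
        bool untrusted;
#ifdef CONFIG_PCI_ATS
        unsigned int ats_cap;           /* only exists with CONFIG_PCI_ATS=y */
#endif
};

static bool ats_supported(const struct pci_dev_like *pdev)
{
#ifdef CONFIG_PCI_ATS
        return !pdev->untrusted && pdev->ats_cap;
#else
        return false;                   /* no ATS support compiled in */
#endif
}

int main(void)
{
        struct pci_dev_like dev = { .untrusted = false };

        printf("ATS supported: %d\n", ats_supported(&dev));
        return 0;
}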

Fixes: bfff88ec1a ("iommu/arm-smmu-v3: Rework enabling/disabling of ATS for PCI masters")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-03 14:58:20 +02:00
Yoshihiro Shimoda
158a6d3ce3 iommu/dma: add a new dma_map_ops of get_merge_boundary()
This patch adds a new dma_map_ops callback, get_merge_boundary(), to
expose the DMA merge boundary if the domain type is IOMMU_DOMAIN_DMA.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-09-03 08:33:06 +02:00
Gustavo A. R. Silva
8758553791 iommu/qcom: Use struct_size() helper
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

struct qcom_iommu_dev {
	...
        struct qcom_iommu_ctx   *ctxs[0];   /* indexed by asid-1 */
};

Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes.

So, replace the following form:

sizeof(*qcom_iommu) + (max_asid * sizeof(qcom_iommu->ctxs[0]))

with:

struct_size(qcom_iommu, ctxs, max_asid)

Also, notice that, in this case, variable sz is not necessary,
hence it is removed.

This code was detected with the help of Coccinelle.
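
For illustration, a user-space approximation of what that struct_size()
computation covers for a trailing flexible array (the kernel macro
additionally saturates on arithmetic overflow; the types here are
stand-ins that only mirror the names in this commit):

#include <stdio.h>
#include <stdlib.h>

struct qcom_iommu_ctx;                          /* opaque for this sketch */

struct qcom_iommu_dev {
        int num_ctxs;
        struct qcom_iommu_ctx *ctxs[];          /* flexible array member */
};

int main(void)
{
        size_t max_asid = 8;
        /* roughly what struct_size(qcom_iommu, ctxs, max_asid) evaluates to */
        size_t sz = sizeof(struct qcom_iommu_dev) +
                    max_asid * sizeof(struct qcom_iommu_ctx *);
        struct qcom_iommu_dev *dev = calloc(1, sz);

        if (!dev)
                return 1;
        printf("allocated %zu bytes for %zu contexts\n", sz, max_asid);
        free(dev);
        return 0;
}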

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 16:35:52 +02:00
Tom Murphy
d127bc9be8 iommu: Remove wrong default domain comments
These comments are wrong. request_default_domain_for_dev doesn't just
handle direct mapped domains.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 16:33:10 +02:00
Lu Baolu
0ce4a85f4f Revert "iommu/vt-d: Avoid duplicated pci dma alias consideration"
This reverts commit 557529494d.

Commit 557529494d ("iommu/vt-d: Avoid duplicated pci dma alias
consideration") aimed to address a NULL pointer deference issue
happened when a thunderbolt device driver returned unexpectedly.

Unfortunately, this change breaks a previous pci quirk added by
commit cc346a4714 ("PCI: Add function 1 DMA alias quirk for
Marvell devices"), as the result, devices like Marvell 88SE9128
SATA controller doesn't work anymore.

We will continue to try to find the real culprit mentioned in
557529494d, but for now we should revert it to fix current
breakage.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=204627
Cc: Stijn Tintel <stijn@linux-ipv6.be>
Cc: Petr Vandrovec <petr@vandrovec.name>
Reported-by: Stijn Tintel <stijn@linux-ipv6.be>
Reported-by: Petr Vandrovec <petr@vandrovec.name>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 16:26:12 +02:00
Yunsheng Lin
6b0c54e7f2 iommu/dma: Fix for dereferencing before null checking
The cookie is dereferenced before null checking in the function
iommu_dma_init_domain.

This patch moves the dereferencing after the null checking.
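
A tiny standalone illustration of the reordering (names are made up, not
the iommu_dma code): check the pointer before touching any member.

#include <stdio.h>
#include <stddef.h>

struct cookie {
        int type;
};

static int init_domain(const struct cookie *cookie)
{
        if (!cookie)                    /* null check first ... */
                return -1;

        return cookie->type;            /* ... dereference only afterwards */
}

int main(void)
{
        struct cookie c = { .type = 2 };

        printf("%d %d\n", init_domain(NULL), init_domain(&c));
        return 0;
}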

Fixes: fdbe574eb6 ("iommu/dma: Allow MSI-only cookies")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 16:23:19 +02:00
Joerg Roedel
4c00889341 Merge branch 'arm/smmu' into arm/mediatek 2019-08-30 16:12:10 +02:00
Yong Wu
1ee9feb2c9 iommu/mediatek: Clean up struct mtk_smi_iommu
Remove the "struct mtk_smi_iommu" to simplify the code since it has only
one item in it right now.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:27 +02:00
Yong Wu
ec2da07ca1 memory: mtk-smi: Get rid of need_larbid
The "mediatek,larb-id" has already been parsed in MTK IOMMU driver.
It's no need to parse it again in SMI driver. Only clean some codes.
This patch is fit for all the current mt2701, mt2712, mt7623, mt8173
and mt8183.

After this patch, the "mediatek,larb-id" only be needed for mt2712
which have 2 M4Us. In the other SoCs, we can get the larb-id from M4U
in which the larbs in the "mediatek,larbs" always are ordered.

Correspondingly, the larb_nr in the "struct mtk_smi_iommu" could also
be deleted.

CC: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Evan Green <evgreen@chromium.org>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:27 +02:00
Yong Wu
b9475b3471 iommu/mediatek: Fix VLD_PA_RNG register backup when suspend
The register VLD_PA_RNG(0x118) was forgotten in the suspend backup when
adding 4GB mode support for mt2712. This patch adds it.

Fixes: 30e2fccf95 ("iommu/mediatek: Enlarge the validate PA range
for 4GB mode")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Evan Green <evgreen@chromium.org>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:27 +02:00
Yong Wu
15a01f4c60 iommu/mediatek: Add mmu1 support
Normally the M4U HW connects the EMI with the smi. The diagram is like below:
              EMI
               |
              M4U
               |
            smi-common
               |
       -----------------
       |    |    |     |    ...
    larb0 larb1  larb2 larb3

Actually there are 2 mmu cells in the M4U HW, like this diagram:

              EMI
           ---------
            |     |
           mmu0  mmu1     <- M4U
            |     |
           ---------
               |
            smi-common
               |
       -----------------
       |    |    |     |    ...
    larb0 larb1  larb2 larb3

This patch adds support for mmu1. In order to get better performance,
we can route some larbs to mmu1 while the others still go to
mmu0. This is controlled by the SMI COMMON register SMI_BUS_SEL(0x220).

The mt2712, mt8173 and mt8183 M4U HW all have 2 mmu cells. The default
value of that register is 0, which means all the larbs go to mmu0 by
default.

This is a preparatory patch for adjusting SMI_BUS_SEL for mt8183.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Evan Green <evgreen@chromium.org>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:27 +02:00
Yong Wu
907ba6a195 iommu/mediatek: Add mt8183 IOMMU support
The M4U IP block in mt8183 is MediaTek's generation 2 M4U, which uses
the ARM Short-descriptor format like mt8173, and most of the HW registers
are the same.

Here are the main differences between mt8183 and mt8173/mt2712:
1) mt8183 has only one M4U HW like mt8173, while mt2712 has two.
2) mt8183 doesn't have the "bclk" clock; it uses the EMI clock instead.
3) mt8183 can support dram over 4GB, but it doesn't call this "4GB
mode".
4) The mt8183 pgtable base register(0x0) extends bit[1:0], which represent
bit[33:32] of the physical address of the pgtable base. But the
standard ttbr0[1] means the S bit, which is enabled by default; hence,
we add a mask.
5) mt8183 HW has GALS modules, so SMI should enable "has_gals" support.
6) mt8183 needs reset_axi like mt8173.
7) The larb-id in smi-common is remapped, so M4U should add its larbid_remap.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Evan Green <evgreen@chromium.org>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:27 +02:00
Yong Wu
2b326d8b1d iommu/mediatek: Move vld_pa_rng into plat_data
Neither mt8173 nor mt8183 has this vld_pa_rng (valid physical address
range) register, while mt2712 does. Move it into the plat_data.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Evan Green <evgreen@chromium.org>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:27 +02:00
Yong Wu
50822b0b94 iommu/mediatek: Move reset_axi into plat_data
In mt8173 and mt8183, 0x48 is REG_MMU_STANDARD_AXI_MODE, while it is
REG_MMU_CTRL in the other SoCs, and the bit meanings are completely
different from REG_MMU_STANDARD_AXI_MODE.

This patch moves this property to plat_data; it is also a preparatory
patch for mt8183.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Nicolas Boichat <drinkcat@chromium.org>
Reviewed-by: Evan Green <evgreen@chromium.org>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:27 +02:00
Yong Wu
acb3c92a61 iommu/mediatek: Refine protect memory definition
The protect memory setting is a little different across the SoCs.
In the register REG_MMU_CTRL_REG(0x110), the TF_PROT (translation fault
protect) shift is normally 4, while it is 5 only on mt8173. This patch
deletes the complex macro and uses a common if-else instead.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Evan Green <evgreen@chromium.org>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:27 +02:00
Yong Wu
b3e5eee765 iommu/mediatek: Add larb-id remapped support
The larb-id may be remapped in the smi-common. This means the
larb-id reported in mtk_iommu_isr isn't the real larb-id.

Take mt8183 as an example:
                       M4U
                        |
---------------------------------------------
|               SMI common                  |
-0-----7-----5-----6-----1-----2------3-----4- <- Id remapped
 |     |     |     |     |     |      |     |
larb0 larb1 IPU0  IPU1 larb4 larb5  larb6  CCU
disp  vdec  img   cam   venc  img    cam
As above, larb0 connects with the id 0 in smi-common.
          larb1 connects with the id 7 in smi-common.
          ...
If the larb-id reported in the isr is 7, it is actually larb1 (vdec).
In order to output the right larb-id in the isr, we add a larb-id
remapping relationship in this patch.

If an SoC has no such larb-id remapping, the linear mapping array is
used instead.

This is also a preparatory patch for mt8183.
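
As a standalone sketch, the remapping can be read straight off the
diagram above into a lookup table indexed by the id the ISR reports (the
table below is derived from that diagram only, not copied from the
driver):

#include <stdio.h>

/* indexed by the id reported in the ISR; values read off the mt8183 diagram */
static const char *const mt8183_reported_id_owner[8] = {
        "larb0", "larb4", "larb5", "larb6", "CCU", "IPU0", "IPU1", "larb1",
};

int main(void)
{
        int reported = 7;

        printf("reported id %d really belongs to %s\n",
               reported, mt8183_reported_id_owner[reported]);
        return 0;
}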

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Nicolas Boichat <drinkcat@chromium.org>
Reviewed-by: Evan Green <evgreen@chromium.org>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:26 +02:00
Yong Wu
2aa4c2597c iommu/mediatek: Add bclk can be supported optionally
In some SoCs, the M4U doesn't have its own "bclk"; it uses the EMI
clock instead, which is always enabled by the time we enter the kernel.

Currently mt2712 and mt8173 have this bclk while mt8183 doesn't.

This is also a preparatory patch for mt8183.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Evan Green <evgreen@chromium.org>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:26 +02:00
Yong Wu
b4dad40e4f iommu/mediatek: Adjust the PA for the 4GB Mode
After extending v7s to support PA[33:32] for MediaTek, we have to adjust
the PA ourselves for the 4GB mode.

In the 4GB Mode, the PA will remap like this:
CPU PA         ->    M4U output PA
0x4000_0000          0x1_4000_0000 (Add bit32)
0x8000_0000          0x1_8000_0000 ...
0xc000_0000          0x1_c000_0000 ...
0x1_0000_0000        0x1_0000_0000 (No change)

1) Always add bit32 for CPU PA in ->map.
2) Discard the bit32 in iova_to_phys if PA > 0x1_4000_0000 since the
iommu consumers always use the CPU PA.

Besides, the "oas" always is set to 34 since v7s has already supported our
case.

Both mt2712 and mt8173 support this "4GB mode" while the mt8183 don't.
The PA in mt8183 won't remap.
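
A standalone sketch of the two adjustments, following the mapping table
above (function names and the exact boundary handling here are
illustrative, not the driver code):

#include <assert.h>
#include <stdint.h>

#define BIT32 (1ULL << 32)

/* ->map: the CPU PA always gets bit32 set before it is handed to the M4U. */
static uint64_t cpu_pa_to_m4u_pa(uint64_t cpu_pa)
{
        return cpu_pa | BIT32;
}

/* iova_to_phys: drop bit32 again for the remapped 1G-4G window, but keep
 * the 0x1_0000_0000..0x1_3fff_ffff range untouched since the CPU really
 * does address it above 4GB. */
static uint64_t m4u_pa_to_cpu_pa(uint64_t m4u_pa)
{
        if (m4u_pa >= 0x140000000ULL)
                return m4u_pa & ~BIT32;
        return m4u_pa;
}

int main(void)
{
        assert(cpu_pa_to_m4u_pa(0x40000000ULL) == 0x140000000ULL);
        assert(m4u_pa_to_cpu_pa(0x140000000ULL) == 0x40000000ULL);
        assert(m4u_pa_to_cpu_pa(0x100000000ULL) == 0x100000000ULL);
        return 0;
}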

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:26 +02:00
Yong Wu
4c019de653 iommu/io-pgtable-arm-v7s: Extend to support PA[33:32] for MediaTek
MediaTek extends the arm v7s descriptor to support up to a 34-bit PA,
where bit32 and bit33 are encoded in bit9 and bit4 of the PTE
respectively. Meanwhile the iova is still 32 bits.
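
As a standalone sketch of that bit packing (assuming, per the
description above, PA bit32 lands in PTE bit9 and PA bit33 in PTE bit4;
the helper names are made up):

#include <assert.h>
#include <stdint.h>

static uint32_t mtk_paddr_to_iopte(uint64_t paddr)
{
        uint32_t pte = (uint32_t)(paddr & 0xfffff000);  /* PA[31:12] */

        if (paddr & (1ULL << 32))
                pte |= 1u << 9;                         /* PA bit32 -> PTE bit9 */
        if (paddr & (1ULL << 33))
                pte |= 1u << 4;                         /* PA bit33 -> PTE bit4 */
        return pte;
}

static uint64_t mtk_iopte_to_paddr(uint32_t pte)
{
        uint64_t paddr = pte & 0xfffff000;

        if (pte & (1u << 9))
                paddr |= 1ULL << 32;
        if (pte & (1u << 4))
                paddr |= 1ULL << 33;
        return paddr;
}

int main(void)
{
        uint64_t pa = 0x2c0001000ULL;                   /* a 34-bit physical address */

        assert(mtk_iopte_to_paddr(mtk_paddr_to_iopte(pa)) == pa);
        return 0;
}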

Regarding whether the pagetable address itself can be above 4GB: mt8183
supports it while the previous mt8173 doesn't, thus keep it as is.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:26 +02:00
Yong Wu
73d50811bc iommu/io-pgtable-arm-v7s: Rename the quirk from MTK_4GB to MTK_EXT
In the previous mt2712/mt8173, MediaTek extended v7s to support 4GB of
dram. But in the latest mt8183, we extend it to support a PA of up to
34 bits. The "MTK_4GB" name no longer fits, so this patch only changes
the quirk name to "MTK_EXT".

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:26 +02:00
Yong Wu
7f315c9da9 iommu/io-pgtable-arm-v7s: Use ias/oas to check the valid iova/pa
Use ias/oas to check the valid iova/pa. Synchronize this checking with
io-pgtable-arm.c.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:26 +02:00
Yong Wu
5950b9541b iommu/io-pgtable-arm-v7s: Add paddr_to_iopte and iopte_to_paddr helpers
Add two helper functions: paddr_to_iopte and iopte_to_paddr.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Evan Green <evgreen@chromium.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:26 +02:00
Yong Wu
76ce65464f iommu/mediatek: Fix iova_to_phys PA start for 4GB mode
In M4U 4GB mode, the physical address is remapped as below:

CPU Physical address:

====================

0      1G       2G     3G       4G     5G
|---A---|---B---|---C---|---D---|---E---|
+--I/O--+------------Memory-------------+

IOMMU output physical address:
 =============================

                                4G      5G     6G      7G      8G
                                |---E---|---B---|---C---|---D---|
                                +------------Memory-------------+

Region 'A' (I/O) cannot be mapped by the M4U; for regions 'B'/'C'/'D',
bit32 of the CPU physical address always needs to be set, and for region
'E', the CPU physical address is kept as is. It looks like this:
CPU PA         ->    M4U OUTPUT PA
0x4000_0000          0x1_4000_0000 (Add bit32)
0x8000_0000          0x1_8000_0000 ...
0xc000_0000          0x1_c000_0000 ...
0x1_0000_0000        0x1_0000_0000 (No change)

Additionally, the iommu consumers always use the CPU physical address.

The PA in iova_to_phys that is obtained from v7s is always a u32, but
from the CPU's point of view, BIT(32) only needs to be added when PA < 0x4000_0000.

Fixes: 30e2fccf95 ("iommu/mediatek: Enlarge the validate PA range
for 4GB mode")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:26 +02:00
Yong Wu
cecdce9d7e iommu/mediatek: Use a struct as the platform data
Use a struct for the platform-specific data instead of the enumeration.
This is a preparatory patch for adding mt8183 iommu support.

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Reviewed-by: Evan Green <evgreen@chromium.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:57:26 +02:00
Eric Auger
4dbd258ff6 iommu: Revisit iommu_insert_resv_region() implementation
The current implementation is recursive, and in case of allocation
failure the existing @regions list is altered. A non-recursive
version is better for maintainability and simplifies the
error handling. We use a separate stack for overlapping segment
merging. The elements are sorted by start address and then by
type if their start addresses match.
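
A standalone sketch of the merging step (plain arrays instead of the
kernel's list handling; the regions are assumed already sorted by start
address as described above):

#include <stdio.h>

struct region {
        unsigned long start, end;               /* end is inclusive */
};

/* Merge adjacent/overlapping entries of a sorted array in place and
 * return the new element count. */
static int merge_regions(struct region *r, int n)
{
        int out = 0;

        for (int i = 1; i < n; i++) {
                if (r[i].start <= r[out].end + 1) {
                        if (r[i].end > r[out].end)
                                r[out].end = r[i].end;  /* extend current region */
                } else {
                        r[++out] = r[i];                /* start a new region */
                }
        }
        return n ? out + 1 : 0;
}

int main(void)
{
        struct region r[] = { { 0x0, 0xfff }, { 0x800, 0x1fff }, { 0x4000, 0x4fff } };
        int n = merge_regions(r, 3);

        for (int i = 0; i < n; i++)
                printf("[0x%lx-0x%lx]\n", r[i].start, r[i].end);
        return 0;
}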

Note this new implementation may change the region order of
appearance in /sys/kernel/iommu_groups/<n>/reserved_regions
files but this order has never been documented, see
commit bc7d12b91b ("iommu: Implement reserved_regions
iommu-group sysfs file").

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:49:55 +02:00
Nadav Amit
2c70010867 iommu/vt-d: Fix wrong analysis whether devices share the same bus
set_msi_sid_cb() is used to determine whether device aliases share the
same bus, but it can provide false indications that aliases use the same
bus when in fact they do not. The reason is that set_msi_sid_cb()
assumes that pdev is fixed, while actually pci_for_each_dma_alias() can
call fn() when pdev is set to a subordinate device.

As a result, running a VM on ESX with VT-d emulation enabled can
result in log warnings such as:

  DMAR: [INTR-REMAP] Request device [00:11.0] fault index 3b [fault reason 38] Blocked an interrupt request due to source-id verification failure

This seems to cause additional ata errors such as:
  ata3.00: qc timeout (cmd 0xa1)
  ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)

These timeouts also make boot much longer and cause other errors.

Fix it by comparing the alias with the previous one instead.

Fixes: 3f0c625c6a ("iommu/vt-d: Allow interrupts from the entire bus for aliased devices")
Cc: stable@vger.kernel.org
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Nadav Amit <namit@vmware.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:47:37 +02:00
Eric Dumazet
0d87308cca iommu/iova: Avoid false sharing on fq_timer_on
In commit 14bd9a607f ("iommu/iova: Separate atomic variables
to improve performance") Jinyu Qi identified that the atomic_cmpxchg()
in queue_iova() was causing a performance loss and moved critical fields
so that the false sharing would not impact them.

However, avoiding the false sharing in the first place seems easy.
We should attempt the atomic_cmpxchg() no more than 100 times
per second. Adding an atomic_read() will keep the cache
line mostly shared.
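
A user-space sketch of the pattern (C11 atomics rather than the kernel's
atomic_t API, with made-up names): do a cheap shared read first and only
fall through to the compare-and-swap when the flag is not already set.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int fq_timer_on;

static bool arm_timer_once(void)
{
        int expected = 0;

        /* cheap read keeps the cache line shared; skip the cmpxchg if armed */
        if (atomic_load_explicit(&fq_timer_on, memory_order_relaxed))
                return false;

        return atomic_compare_exchange_strong(&fq_timer_on, &expected, 1);
}

int main(void)
{
        printf("first caller arms the timer: %d\n", arm_timer_once());
        printf("second caller skips it:      %d\n", arm_timer_once());
        return 0;
}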

This false sharing came with commit 9a005a800a
("iommu/iova: Add flush timer").

Signed-off-by: Eric Dumazet <edumazet@google.com>
Fixes: 9a005a800a ('iommu/iova: Add flush timer')
Cc: Jinyu Qi <jinyuqi@huawei.com>
Cc: Joerg Roedel <jroedel@suse.de>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 15:21:53 +02:00
Qian Cai
3d70889532 iommu/amd: Silence warnings under memory pressure
When running heavy memory pressure workloads, the system is throwing
endless warnings,

smartpqi 0000:23:00.0: AMD-Vi: IOMMU mapping error in map_sg (io-pages:
5 reason: -12)
Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40
07/10/2019
swapper/10: page allocation failure: order:0, mode:0xa20(GFP_ATOMIC),
nodemask=(null),cpuset=/,mems_allowed=0,4
Call Trace:
 <IRQ>
 dump_stack+0x62/0x9a
 warn_alloc.cold.43+0x8a/0x148
 __alloc_pages_nodemask+0x1a5c/0x1bb0
 get_zeroed_page+0x16/0x20
 iommu_map_page+0x477/0x540
 map_sg+0x1ce/0x2f0
 scsi_dma_map+0xc6/0x160
 pqi_raid_submit_scsi_cmd_with_io_request+0x1c3/0x470 [smartpqi]
 do_IRQ+0x81/0x170
 common_interrupt+0xf/0xf
 </IRQ>

because the allocation could fail in iommu_map_page(), and the volume
of these calls could be huge, which may generate a lot of serial console
output and consume all CPUs.

Fix it by silencing the warning at this call site; there is still a
dev_err() later to report the failure.

Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 12:50:57 +02:00
Joerg Roedel
dbe8e6a81a Merge branch 'for-joerg/arm-smmu/updates' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into arm/smmu 2019-08-23 17:41:59 +02:00
Will Deacon
1554240ff8 Merge branches 'for-joerg/arm-smmu/smmu-v2' and 'for-joerg/arm-smmu/smmu-v3' into for-joerg/arm-smmu/updates
* for-joerg/arm-smmu/smmu-v2:
  Refactoring to allow for implementation-specific hooks in 'arm-smmu-impl.c'

* for-joerg/arm-smmu/smmu-v3:
  Support for deferred TLB invalidation and batching of commands
  Rework ATC invalidation for ATS-enabled PCIe masters
2019-08-23 15:05:45 +01:00
Kai-Heng Feng
93d051550e iommu/amd: Override wrong IVRS IOAPIC on Raven Ridge systems
Raven Ridge systems may have a malfunctioning touchpad or hang at boot
if an incorrect IVRS IOAPIC is provided by the BIOS.

Users have already found correct "ivrs_ioapic=" values, so let's put
them inside the kernel to work around the buggy BIOS.

BugLink: https://bugs.launchpad.net/bugs/1795292
BugLink: https://bugs.launchpad.net/bugs/1837688
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-23 10:26:48 +02:00
Joerg Roedel
2cc13bb4f5 iommu: Disable passthrough mode when SME is active
Using Passthrough mode when SME is active causes certain
devices to use the SWIOTLB bounce buffer. The bounce buffer
code has an upper limit of 256kb for the size of DMA
allocations, which is too small for certain devices and
causes them to fail.

With this patch we enable IOMMU by default when SME is
active in the system, making the default configuration work
for more systems than it does now.

Users that don't want IOMMUs to be enabled still can disable
them with kernel parameters.

Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-23 10:11:29 +02:00
Joerg Roedel
22bb182c83 iommu: Set default domain type at runtime
Set the default domain-type at runtime, not at compile-time.
This keeps default domain type setting in one place when we
have to change it at runtime.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-23 10:11:28 +02:00
Joerg Roedel
5fa9e7c5fa iommu: Print default domain type on boot
Introduce a subsys_initcall for IOMMU code and use it to
print the default domain type at boot.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-23 10:11:28 +02:00
Joerg Roedel
6b9a7d3a46 iommu/vt-d: Request passthrough mode from IOMMU core
Get rid of the iommu_pass_through variable and request
passthrough mode via the new iommu core function.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-23 10:09:58 +02:00
Joerg Roedel
cc7c8ad973 iommu/amd: Request passthrough mode from IOMMU core
Get rid of the iommu_pass_through variable and request
passthrough mode via the new iommu core function.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-23 10:09:58 +02:00
Joerg Roedel
adab0b07cb iommu: Use Functions to set default domain type in iommu_set_def_domain_type()
There are functions now to set the default domain type which
take care of updating other necessary state. Don't open-code
it in iommu_set_def_domain_type() and use those functions
instead.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-23 10:09:58 +02:00
Joerg Roedel
8a69961c7f iommu: Add helpers to set/get default domain type
Add a couple of functions to allow changing the default
domain type from architecture code and a function for iommu
drivers to request whether the default domain is
passthrough.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-23 10:09:58 +02:00
Joerg Roedel
faf1498993 iommu: Remember when default domain type was set on kernel command line
Introduce an extensible concept to remember when certain
configuration settings for the IOMMU code have been set on
the kernel command line.

This will be used later to prevent overwriting these
settings with other defaults.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-23 10:09:58 +02:00
Will Deacon
a91bcc2b65 Revert "iommu/arm-smmu-v3: Disable detection of ATS and PRI"
This reverts commit b5e86196b8.

Now that ATC invalidation is performed in the correct places and without
incurring a locking overhead for non-ATS systems, we can re-enable the
corresponding SMMU feature detection.

Signed-off-by: Will Deacon <will@kernel.org>
2019-08-22 18:16:19 +01:00
Will Deacon
cdb8a3c346 iommu/arm-smmu-v3: Avoid locking on invalidation path when not using ATS
When ATS is not in use, we can avoid taking the 'devices_lock' for the
domain on the invalidation path by simply caching the number of ATS
masters currently attached. The fiddly part is handling a concurrent
->attach() of an ATS-enabled master to a domain that is being
invalidated, but we can handle this using an 'smp_mb()' to ensure that
our check of the count is ordered after completion of our prior TLB
invalidation.

This also makes our ->attach() and ->detach() flows symmetric wrt ATS
interactions.
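
As a rough sketch of the idea (the structure and function names below are
illustrative, not the actual driver code), the fast path boils down to a
per-domain counter plus a barrier:

struct my_smmu_domain {
	atomic_t		nr_ats_masters;	/* ATS-enabled devices attached */
	spinlock_t		devices_lock;
	struct list_head	devices;
};

static void my_atc_inv_domain(struct my_smmu_domain *smmu_domain,
			      unsigned long iova, size_t size)
{
	unsigned long flags;

	/*
	 * Order the read of the counter after completion of the prior TLB
	 * invalidation, so a concurrently attaching ATS master cannot be
	 * missed here without also having observed that invalidation.
	 */
	smp_mb();
	if (!atomic_read(&smmu_domain->nr_ats_masters))
		return;	/* no ATS masters, so no need for devices_lock */

	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
	/* ... walk smmu_domain->devices and issue CMD_ATC_INV per master ... */
	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
}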

Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-22 18:16:11 +01:00
Will Deacon
353e3cf859 iommu/arm-smmu-v3: Fix ATC invalidation ordering wrt main TLBs
When invalidating the ATC for a PCIe endpoint using ATS, we must take
care to complete invalidation of the main SMMU TLBs beforehand, otherwise
the device could immediately repopulate its ATC with stale translations.

Hooking the ATC invalidation into ->unmap() as we currently do does the
exact opposite: it ensures that the ATC is invalidated *before* the
main TLBs, which is bogus.

Move ATC invalidation into the actual (leaf) invalidation routines so
that it is always called after completing main TLB invalidation.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-21 17:58:54 +01:00
Will Deacon
bfff88ec1a iommu/arm-smmu-v3: Rework enabling/disabling of ATS for PCI masters
To prevent any potential issues arising from speculative Address
Translation Requests from an ATS-enabled PCIe endpoint, rework our ATS
enabling/disabling logic so that we enable ATS at the SMMU before we
enable it at the endpoint, and disable things in the opposite order.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-21 17:58:42 +01:00
Will Deacon
7314ca8699 iommu/arm-smmu-v3: Don't issue CMD_SYNC for zero-length invalidations
Calling arm_smmu_tlb_inv_range() with a size of zero, perhaps due to
an empty 'iommu_iotlb_gather' structure, should be a NOP. Elide the
CMD_SYNC when there is no invalidation to be performed.
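
A minimal sketch of the guard (the function name is illustrative):

static void my_tlb_inv_range(unsigned long iova, size_t size,
			     size_t granule, void *cookie)
{
	if (!size)
		return;	/* nothing to invalidate: no TLBI and no CMD_SYNC */

	/* ... build and submit TLBI commands, then a single CMD_SYNC ... */
}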

Signed-off-by: Will Deacon <will@kernel.org>
2019-08-21 17:58:41 +01:00
Will Deacon
f75d8e33df iommu/arm-smmu-v3: Remove boolean bitfield for 'ats_enabled' flag
There's really no need for this to be a bitfield, particularly as we
don't have bitwise addressing on arm64.

Signed-off-by: Will Deacon <will@kernel.org>
2019-08-21 17:58:40 +01:00
Will Deacon
b5e86196b8 iommu/arm-smmu-v3: Disable detection of ATS and PRI
Detecting the ATS capability of the SMMU at probe time introduces a
spinlock into the ->unmap() fast path, even when ATS is not actually
in use. Furthermore, the ATC invalidation that exists is broken, as it
occurs before invalidation of the main SMMU TLB which leaves a window
where the ATC can be repopulated with stale entries.

Given that ATS is both a new feature and a specialist sport, disable it
for now whilst we fix it properly in subsequent patches. Since PRI
requires ATS, disable that too.

Cc: <stable@vger.kernel.org>
Fixes: 9ce27afc08 ("iommu/arm-smmu-v3: Add support for PCI ATS")
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-21 17:58:12 +01:00
Will Deacon
05cbaf4ddd iommu/arm-smmu-v3: Document ordering guarantees of command insertion
It turns out that we've always relied on some subtle ordering guarantees
when inserting commands into the SMMUv3 command queue. With the recent
changes to elide locking when possible, these guarantees become more
subtle and even more important.

Add a comment documenting the barrier semantics of command insertion so
that we don't have to derive the behaviour from scratch each time it
comes up on the list.

Signed-off-by: Will Deacon <will@kernel.org>
2019-08-21 15:01:53 +01:00
Christoph Hellwig
90ae409f9e dma-direct: fix zone selection after an unaddressable CMA allocation
The new dma_alloc_contiguous hides whether we allocate CMA or regular
pages, and thus fails to retry a ZONE_NORMAL allocation if the CMA
allocation succeeds but isn't addressable.  That means we either fail
outright or dip into a small zone that might not succeed either.

Thanks to Hillf Danton for debugging this issue.
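
A rough sketch of the resulting retry logic, assuming the
dma_alloc_contiguous()/dma_free_contiguous() helpers added by the commit
named in the Fixes tag below; the addressability test shown is
illustrative rather than the exact dma_coherent_ok() check:

static struct page *alloc_dma_pages(struct device *dev, size_t size,
				    gfp_t gfp)
{
	struct page *page = dma_alloc_contiguous(dev, size, gfp);

	/* CMA may hand back memory the device cannot address; if so, retry. */
	if (page && phys_to_dma(dev, page_to_phys(page)) + size - 1 >
		    dev->coherent_dma_mask) {
		dma_free_contiguous(dev, page, size);
		page = NULL;
	}
	if (!page)
		page = alloc_pages_node(dev_to_node(dev), gfp,
					get_order(size));
	return page;
}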

Fixes: b1d2dc009d ("dma-contiguous: add dma_{alloc,free}_contiguous() helpers")
Reported-by: Tobias Klausmann <tobias.johannes.klausmann@mni.thm.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Tobias Klausmann <tobias.johannes.klausmann@mni.thm.de>
2019-08-21 07:14:10 +09:00
Robin Murphy
d720e64150 iommu/arm-smmu: Ensure 64-bit I/O accessors are available on 32-bit CPU
As part of the grand SMMU driver refactoring effort, the I/O register
accessors were moved into 'arm-smmu.h' in commit 6d7dff62af
("iommu/arm-smmu: Move Secure access quirk to implementation").

On 32-bit architectures (such as ARM), the 64-bit accessors are defined
in 'linux/io-64-nonatomic-hi-lo.h', so include this header to fix the
build.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-20 12:48:37 +01:00
Will Deacon
4b67f1ddcf iommu/arm-smmu: Make private implementation details static
Many of the device-specific implementation details in 'arm-smmu-impl.c'
are exposed to other compilation units. Whilst we may require this in
the future, let's make it all 'static' for now so that we can expose
things on a case-by-case basis.

Signed-off-by: Will Deacon <will@kernel.org>
2019-08-20 10:58:03 +01:00
Joerg Roedel
fe427e373d Merge branch 'for-joerg/batched-unmap' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into core 2019-08-20 11:09:43 +02:00
Robin Murphy
ba7e4a08bb iommu/arm-smmu: Add context init implementation hook
Allocating and initialising a context for a domain is another point
where certain implementations are known to want special behaviour.
Currently the other half of the Cavium workaround comes into play here,
so let's finish the job to get the whole thing right out of the way.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:48 +01:00
Robin Murphy
62b993a36e iommu/arm-smmu: Add reset implementation hook
Reset is an activity rife with implementation-defined poking. Add a
corresponding hook, and use it to encapsulate the existing MMU-500
details.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:48 +01:00
Robin Murphy
3995e18689 iommu/arm-smmu: Add configuration implementation hook
Probing the ID registers and setting up the SMMU configuration is an
area where overrides and workarounds may well be needed. Indeed, the
Cavium workaround detection lives there at the moment, so let's break
that out.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:48 +01:00
Robin Murphy
6d7dff62af iommu/arm-smmu: Move Secure access quirk to implementation
Move detection of the Secure access quirk to its new home, trimming it
down in the process - time has proven that boolean DT flags are neither
ideal nor necessarily sufficient, so it's highly unlikely we'll ever add
more, let alone enough to justify the frankly overengineered parsing
machinery.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:48 +01:00
Robin Murphy
fc058d37b3 iommu/arm-smmu: Add implementation infrastructure
Add some nascent infrastructure for handling implementation-specific
details outside the flow of the architectural code. This will allow us
to keep mutually-incompatible vendor-specific hooks in their own files
where the respective interested parties can maintain them with minimal
chance of conflicts. As somewhat of a template, we'll start with a
general place to collect the relatively trivial existing quirks.
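
The general shape of such infrastructure is an ops structure of optional
callbacks consulted from the architectural code; the names below are a
sketch, not necessarily the exact hooks in arm-smmu-impl.c:

struct my_smmu_device;

struct my_smmu_impl {
	int (*cfg_probe)(struct my_smmu_device *smmu);
	int (*reset)(struct my_smmu_device *smmu);
	int (*init_context)(struct my_smmu_device *smmu, int idx);
};

static int my_mmu500_reset(struct my_smmu_device *smmu)
{
	/* hypothetical implementation-specific reset quirk */
	return 0;
}

/* Each implementation supplies only the hooks it actually needs. */
static const struct my_smmu_impl my_mmu500_impl = {
	.reset = my_mmu500_reset,
};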

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:47 +01:00
Robin Murphy
c5fc64881f iommu/arm-smmu: Rename arm-smmu-regs.h
We're about to start using it for more than just register definitions,
so generalise the name.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:47 +01:00
Robin Murphy
00320ce650 iommu/arm-smmu: Abstract GR0 accesses
Clean up the remaining accesses to GR0 registers, so that everything is
now neatly abstracted. This folds up the Non-Secure alias quirk as the
first step towards moving it out of the way entirely. Although GR0 does
technically contain some 64-bit registers (sGFAR and the weird SMMUv2
HYPC and MONC stuff), they're not ones we have any need to access.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:47 +01:00
Robin Murphy
19713fd40d iommu/arm-smmu: Abstract context bank accesses
Context bank accesses are fiddly enough to deserve a number of extra
helpers to keep the callsites looking sane, even though there are only
one or two of each.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:47 +01:00
Robin Murphy
aadbf2143a iommu/arm-smmu: Abstract GR1 accesses
Introduce some register access abstractions which we will later use to
encapsulate various quirks. GR1 is the easiest page to start with.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:47 +01:00
Robin Murphy
6100576284 iommu/arm-smmu: Get rid of weird "atomic" write
The smmu_write_atomic_lq oddity made some sense when the context
format was effectively tied to CONFIG_64BIT, but these days it's
simpler to just pick an explicit access size based on the format
for the one-and-a-half times we actually care.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:47 +01:00
Robin Murphy
71e8a8cdaf iommu/arm-smmu: Split arm_smmu_tlb_inv_range_nosync()
Since we now use separate iommu_gather_ops for stage 1 and stage 2
contexts, we may as well divide up the monolithic callback into its
respective stage 1 and stage 2 parts.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:47 +01:00
Robin Murphy
490325e0c1 iommu/arm-smmu: Rework cb_base handling
To keep register-access quirks manageable, we want to structure things
to avoid needing too many individual overrides. It seems fairly clean to
have a single interface which handles both global and context registers
in terms of the architectural pages, so the first preparatory step is to
rework cb_base into a page number rather than an absolute address.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:47 +01:00
Robin Murphy
620565a76b iommu/arm-smmu: Convert context bank registers to bitfields
Finish the final part of the job, once again updating some names to
match the current spec.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:47 +01:00
Robin Murphy
5114e96cb2 iommu/arm-smmu: Convert GR1 registers to bitfields
As for GR0, use the bitfield helpers to make GR1 usage a little cleaner,
and use it as an opportunity to audit and tidy the definitions. This
tweaks the handling of CBAR types to match what we did for S2CR a while
back, and fixes a couple of names which didn't quite match the latest
architecture spec (IHI0062D.c).

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:47 +01:00
Robin Murphy
0caf5f4e84 iommu/arm-smmu: Convert GR0 registers to bitfields
FIELD_PREP remains a terrible name, but the overall simplification will
make further work on this stuff that much more manageable. This also
serves as an audit of the header, wherein we can impose a consistent
grouping and ordering of the offset and field definitions.
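
The bitfield helpers replace open-coded shift-and-mask arithmetic; a small
illustration (the field definitions below are made up, not the real GR0
layout):

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define MY_sCR0_BSU		GENMASK(15, 14)
#define MY_sCR0_CLIENTPD	BIT(0)

static u32 my_build_scr0(void)
{
	u32 reg = 0;

	reg |= FIELD_PREP(MY_sCR0_BSU, 3);	/* place the value 3 in bits [15:14] */
	reg &= ~MY_sCR0_CLIENTPD;		/* clear bit 0 */
	return reg;
}

/* Reading a field back is the mirror image: FIELD_GET(MY_sCR0_BSU, reg). */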

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:47 +01:00
Robin Murphy
a5b396ce50 iommu/qcom: Mask TLBI addresses correctly
As with arm-smmu from whence this code was borrowed, the IOVAs passed in
here happen to be at least page-aligned anyway, but still; oh dear.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:47 +01:00
Robin Murphy
353b325047 iommu/arm-smmu: Mask TLBI address correctly
The less said about "~12UL" the better. Oh dear.

We get away with it due to calling constraints that mean IOVAs are
implicitly at least page-aligned to begin with, but still; oh dear.
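
For concreteness, a plain-C illustration of the difference (not the
driver's code):

static unsigned long mask_tlbi_iova(unsigned long iova)
{
	/*
	 * The buggy form, iova & ~12UL, clears only bits 2 and 3
	 * (0x12345678 -> 0x12345674). The intent is to clear the whole
	 * 12-bit page offset (0x12345678 -> 0x12345000):
	 */
	return iova & ~((1UL << 12) - 1);
}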

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-19 16:52:47 +01:00
Suman Anna
24ce0bab26 iommu/omap: Use the correct type for SLAB_HWCACHE_ALIGN
The macro SLAB_HWCACHE_ALIGN is of type slab_flags_t, but is currently
assigned in the OMAP IOMMU driver using an unsigned long variable. This
generates a sparse warning around the type check. Fix this by defining
the flags variable with the correct type.

Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-19 15:07:22 +02:00
Christoph Hellwig
df41017eaf ia64: remove support for machvecs
The only thing remaining of the machvecs is a few checks for whether we
are running on an SGI UV system.  Replace those with the existing
is_uv_system() check that has been rewritten to simply check the
OEM ID directly.

That leaves us with a generic kernel that is as fast as the previous
DIG/ZX1/UV kernels, but can support all hardware.  Support for UV
and the HP SBA IOMMU is now optional based on new config options.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lkml.kernel.org/r/20190813072514.23299-27-hch@lst.de
Signed-off-by: Tony Luck <tony.luck@intel.com>
2019-08-16 14:32:26 -07:00
Linus Torvalds
e83b009c5c dma-mapping fixes for 5.3-rc

Merge tag 'dma-mapping-5.3-4' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping fixes from Christoph Hellwig:

 - fix the handling of the bus_dma_mask in dma_get_required_mask, which
   caused a regression in this merge window (Lucas Stach)

 - fix a regression in the handling of DMA_ATTR_NO_KERNEL_MAPPING (me)

 - fix dma_mmap_coherent to not cause page attribute mismatches on
   coherent architectures like x86 (me)

* tag 'dma-mapping-5.3-4' of git://git.infradead.org/users/hch/dma-mapping:
  dma-mapping: fix page attributes for dma_mmap_*
  dma-direct: don't truncate dma_required_mask to bus addressing capabilities
  dma-direct: fix DMA_ATTR_NO_KERNEL_MAPPING
2019-08-14 10:31:11 -07:00
Marek Szyprowski
7991eb39ee iommu/exynos: Remove __init annotation from exynos_sysmmu_probe()
Exynos SYSMMU driver supports deferred probe. It happens when clocks
needed for this driver are not yet available. Typically the next calls to
the driver's ->probe() happen before the init section is freed, but this
is not really guaranteed. To make it safe, remove the __init annotation
from the exynos_sysmmu_probe() function.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-14 10:50:50 +02:00
Christoph Hellwig
33dcb37cef dma-mapping: fix page attributes for dma_mmap_*
All the way back to introducing dma_common_mmap we've defaulted to
marking the pages as uncached.  But this is wrong for DMA coherent
devices.  Later on, DMA_ATTR_WRITE_COMBINE also got incorrect treatment,
as that flag is only treated specially on the alloc side for non-coherent
devices.

Introduce a new dma_pgprot helper that deals with the check for coherent
devices so that only the remapping cases ever reach arch_dma_mmap_pgprot
and we thus ensure no aliasing of page attributes happens, which makes
the powerpc version of arch_dma_mmap_pgprot obsolete and simplifies the
remaining ones.

Note that this means arch_dma_mmap_pgprot is a bit misnamed now, but
we'll phase it out soon.
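
The rough shape of such a helper (a sketch; the real dma_pgprot() also has
to handle the forced-remapping cases, which are elided here):

static pgprot_t my_dma_pgprot(struct device *dev, pgprot_t prot,
			      unsigned long attrs)
{
	if (dev_is_dma_coherent(dev))
		return prot;			/* coherent device: keep it cacheable */
	if (attrs & DMA_ATTR_WRITE_COMBINE)
		return pgprot_writecombine(prot);
	return pgprot_noncached(prot);
}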

Fixes: 64ccc9c033 ("common: dma-mapping: add support for generic dma_mmap_* calls")
Reported-by: Shawn Anastasio <shawn@anastas.io>
Reported-by: Gavin Li <git@thegavinli.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> # arm64
2019-08-10 19:52:45 +02:00
Tero Kristo
1432ebbd60 iommu/omap: remove pm_runtime_irq_safe flag for OMAP IOMMUs
This is not needed for anything, and it prevents proper PM transitions
for parent devices, which is bad in the case of ti-sysc; this effectively
kills PM completely. Thus, remove the flag.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-09 17:37:10 +02:00
Tero Kristo
604629bcb5 iommu/omap: add support for late attachment of iommu devices
The current implementation of the OMAP IOMMU enforces a strict ordering of
device probes, initiated by the iommu and followed by remoteproc later. This
doesn't work too well with the new setup done with the ti-sysc changes, which
may have the devices probed in pretty much any order. To overcome this
limitation, if the iommu has not been probed yet when a consumer tries to
attach to it, add the device to an orphan device list, which is parsed during
iommu probe to see whether any orphan devices should be attached.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-09 17:37:10 +02:00
Suman Anna
d9c4d8a6cc iommu/omap: introduce new API for runtime suspend/resume control
This patch adds the support for the OMAP IOMMUs to be suspended
during the auto suspend/resume of the OMAP remoteproc devices. The
remote processors are auto suspended after a certain period of idle
time or inactivity. This is done by introducing two new APIs,
omap_iommu_domain_deactivate() and omap_iommu_domain_activate(),
to allow the client users/master devices of the IOMMU devices to
deactivate & activate the IOMMU devices from their runtime
suspend/resume operations. There is no API exposed by the IOMMU
layer at present, and so these new APIs are added directly in the
OMAP IOMMU driver to minimize framework changes.

The APIs simply decrement and increment the runtime usage count
of the IOMMU devices and let the context be saved/restored using
the existing runtime pm callbacks.
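
A minimal sketch of the two calls, assuming they reduce to runtime-PM
usage-count operations on the IOMMU's device (the real APIs take the
client's iommu domain and resolve the IOMMU device from it):

int my_iommu_domain_deactivate(struct device *iommu_dev)
{
	return pm_runtime_put_sync(iommu_dev);	/* may invoke ->runtime_suspend() */
}

int my_iommu_domain_activate(struct device *iommu_dev)
{
	return pm_runtime_get_sync(iommu_dev);	/* may invoke ->runtime_resume() */
}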

Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-09 17:37:10 +02:00
Suman Anna
c4206c4e19 iommu/omap: Add system suspend/resume support
The MMU registers for the remote processors lose their context
in Open Switch Retention (OSWR) or device OFF modes. Hence, the
context of the IOMMU needs to be saved before it is put into any
of these lower power states (OSWR/OFF) and restored before it is
powered up to ON again. The IOMMUs need to be active as long as
the client devices that are present behind the IOMMU are active.

This patch adds the dev_pm_ops callbacks to provide the system
suspend/resume functionality through the appropriate runtime
PM callbacks. The PM runtime_resume and runtime_suspend callbacks
are already used to enable, configure and disable the IOMMUs during
the attaching and detaching of the client devices to the IOMMUs,
and the new PM callbacks reuse the same code by invoking the
pm_runtime_force_suspend() and pm_runtime_force_resume() APIs. The
functionality in dev_pm_ops .prepare() checks if the IOMMU device
was already runtime suspended, and skips invoking the suspend/resume
PM callbacks. The suspend/resume PM callbacks are plugged in through
the 'late' pm ops to ensure that the IOMMU devices will be suspended
only after its master devices (remoteproc devices) are suspended and
restored before them.
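
A sketch of the dev_pm_ops wiring described above; the .prepare()
convention of returning a positive value so that the PM core can skip an
already runtime-suspended device follows the generic PM framework, and
the callback names here are placeholders:

static int my_iommu_prepare(struct device *dev)
{
	/* Already runtime suspended: nothing more to save or restore. */
	return pm_runtime_status_suspended(dev) ? 1 : 0;
}

static const struct dev_pm_ops my_iommu_pm_ops = {
	.prepare = my_iommu_prepare,
	/* The existing runtime_suspend/runtime_resume callbacks stay as-is. */
	SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
				     pm_runtime_force_resume)
};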

NOTE:
There are two other existing APIs, omap_iommu_save_ctx() and
omap_iommu_restore_ctx(). These are left as-is to support
suspend/resume of devices on the legacy OMAP3 SoC.

Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-09 17:37:10 +02:00
Suman Anna
c3b44a063d iommu/omap: add logic to save/restore locked TLBs
The MMUs provide a mechanism to lock TLB entries to avoid
eviction and fetching of frequently used page table entries.
These TLBs lose context when the MMUs are turned OFF. Add the
logic to save and restore these locked TLBS during suspend
and resume respectively. There are no locked TLBs during
initial power ON, and they need not be saved during final
shutdown.

Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-09 17:37:09 +02:00
Suman Anna
db8918f61d iommu/omap: streamline enable/disable through runtime pm callbacks
The OMAP IOMMU devices are typically present within the respective
client processor subsystem and have their own dedicated hard-reset
line. Enabling an IOMMU requires the reset line to be deasserted
and the clocks to be enabled before programming the necessary IOMMU
registers. The IOMMU disable sequence follows the reverse order of
enabling. The OMAP IOMMU driver programs the reset lines through
pdata ops to invoke the omap_device_assert/deassert_hardreset API.
The clocks are managed through the pm_runtime framework, and the
callbacks associated with the device's pm_domain, implemented in
the omap_device layer.

Streamline the enable and disable sequences in the OMAP IOMMU
driver by implementing all the above operations within the
runtime pm callbacks. All the OMAP devices have device pm_domain
callbacks plugged in the omap_device layer for automatic runtime
management of the clocks. Invoking the reset management functions
within the runtime pm callbacks in OMAP IOMMU driver therefore
requires that the default device's pm domain callbacks in the
omap_device layer be reset, as the ordering sequence for managing
the reset lines and clocks from the pm_domain callbacks don't gel
well with the implementation in the IOMMU driver callbacks. The
omap_device_enable/omap_device_idle functions are invoked through
the newly added pdata ops.

Consolidating all the device management sequences within the
runtime pm callbacks allows the driver to easily support both
system suspend/resume and runtime suspend/resume using common
code.

Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-09 17:37:09 +02:00
Suman Anna
3846a3b951 iommu/omap: fix boot issue on remoteprocs with AMMU/Unicache
Support has been added to the OMAP IOMMU driver to fix a boot hang
issue on OMAP remoteprocs with AMMU/Unicache, caused by an improper
AMMU/Unicache state upon initial deassertion of the processor reset.
The issue is described in detail in the next three paragraphs.

All the Cortex M3/M4 IPU processor subsystems in OMAP SoCs have a
AMMU/Unicache IP that dictates the memory attributes for addresses
seen by the processor cores. The AMMU/Unicache is configured/enabled
by the SCACHE_CONFIG.BYPASS bit - a value of 1 enables the cache and
mandates all addresses accessed by M3/M4 be defined in the AMMU. This
bit is not programmable from the host processor. The M3/M4 boot
sequence starts out with the AMMU/Unicache in disabled state, and
SYS/BIOS programs the AMMU regions and enables the Unicache during
one of its initial boot steps. This SCACHE_CONFIG.BYPASS bit is
however enabled by default whenever a RET reset is applied to the IP,
irrespective of whether it was previously enabled or not. The AMMU
registers lose their context whenever this reset is applied. The reset
is effective as long as the MMU portion of the subsystem is enabled
and clocked. This behavior is common to all the IPU and DSP subsystems
that have an AMMU/Unicache.

The IPU boot sequence involves enabling and programming the MMU, and
loading the processor and releasing the reset(s) for the processor.
The PM setup code currently sets the target state for most of the
power domains to RET. The L2 MMU can be enabled, programmed and
accessed properly just fine with the domain in hardware supervised
mode, while the power domain goes through a RET->ON->RET transition
during the programming sequence. However, the ON->RET transition
asserts a RET reset, and the SCACHE_CONFIG.BYPASS bit gets auto-set.
An AMMU fault is thrown immediately when the M3/M4 core's reset is
released since the first instruction address itself will not be
defined in any valid AMMU regions. The ON->RET transition happens
automatically on the power domain after enabling the iommu due to
the hardware supervised mode.

This patch adds and invokes the .set_pwrdm_constraint pdata ops, if
present, during the OMAP IOMMU enable and disable functions to resolve
the above boot hang issue. The ops allow invoking a mach-omap2
layer API, pwrdm_set_next_pwrst(), in a multi-arch kernel environment.
The ops also return the current power domain state while enforcing
the constraint so that the driver can store it and use it to set back
the power domain state while releasing the constraint. The pdata ops
implementation restricts the target power domain to ON during enable,
and back to the original power domain state during disable, thereby
eliminating the conditions for the boot issue. The implementation is
effective only when the original power domain state is either RET or
OFF, and is a no-op when it is ON or INACTIVE.

The .set_pwrdm_constraint ops need to be plugged in pdata-quirks
for the affected remote processors to be able to boot properly.

Note that the current issue is seen only on kernels with the affected
power domains programmed to enter RET. For example, IPU1 on DRA7xx is in a
separate domain and is susceptible to this bug, while the IPU2 subsystem
is within CORE power domain, and CORE RET is not supported on this SoC.
IPUs on OMAP4 and OMAP5 are also susceptible since they are in CORE power
domain, and CORE RET is a valid power target on these SoCs.

Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-09 17:37:09 +02:00
Lu Baolu
3a18844dcf iommu/vt-d: Fix possible use-after-free of private domain
Multiple devices might share a private domain. One real example
is a PCI bridge and all the devices behind it. When removing a
private domain, make sure that it has been detached from all
devices to avoid a use-after-free.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Fixes: 942067f1b6 ("iommu/vt-d: Identify default domains replaced with private")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-09 17:35:25 +02:00
Lu Baolu
ae23bfb68f iommu/vt-d: Detach domain before using a private one
When the default domain of a group doesn't work for a device,
the iommu driver will try to use a private domain. The domain
which was previously attached to the device must be detached.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Fixes: 942067f1b6 ("iommu/vt-d: Identify default domains replaced with private")
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Link: https://lkml.org/lkml/2019/8/2/1379
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-09 17:35:25 +02:00
Stephen Boyd
086f9efae7 iommu: Remove dev_err() usage after platform_get_irq()
We don't need dev_err() messages when platform_get_irq() fails now that
platform_get_irq() prints an error message itself when something goes
wrong. Let's remove these prints with a simple semantic patch.

// <smpl>
@@
expression ret;
struct platform_device *E;
@@

ret =
(
platform_get_irq(E, ...)
|
platform_get_irq_byname(E, ...)
);

if ( \( ret < 0 \| ret <= 0 \) )
{
(
-if (ret != -EPROBE_DEFER)
-{ ...
-dev_err(...);
-... }
|
...
-dev_err(...);
)
...
}
// </smpl>

While we're here, remove braces on if statements that only have one
statement (manually).
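
The resulting driver-side change looks like this (an illustrative probe
function, not taken from any specific driver):

static int my_probe(struct platform_device *pdev)
{
	int irq = platform_get_irq(pdev, 0);

	/* No dev_err() here: platform_get_irq() already logged the failure. */
	if (irq < 0)
		return irq;

	/* ... request the irq and set up the device ... */
	return 0;
}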

Cc: Joerg Roedel <joro@8bytes.org>
Cc: iommu@lists.linux-foundation.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Stephen Boyd <swboyd@chromium.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-09 17:33:19 +02:00
Robin Murphy
ab2cbeb0ed iommu/dma: Handle SG length overflow better
Since scatterlist dimensions are all unsigned ints, in the relatively
rare cases where a device's max_segment_size is set to UINT_MAX,
the "cur_len + s_length <= max_len" check in __finalise_sg() will always
return true. As a result, the corner case of such a device mapping an
excessively large scatterlist which is mergeable to or beyond a total
length of 4GB can lead to overflow and a bogus truncated dma_length in
the resulting segment.

As we already assume that any single segment must be no longer than
max_len to begin with, this can easily be addressed by reshuffling the
comparison.
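
A concrete illustration of why the reshuffle helps (plain C, values chosen
to show the wrap-around): with cur_len = 0xfffff000, s_length = 0x2000 and
max_len = UINT_MAX, "cur_len + s_length" wraps around to 0x1000 and the old
check wrongly passes, whereas "max_len - cur_len" cannot wrap because
cur_len never exceeds max_len:

static bool seg_fits(unsigned int cur_len, unsigned int s_length,
		     unsigned int max_len)
{
	/* Old form (can overflow): return cur_len + s_length <= max_len; */
	return s_length <= max_len - cur_len;
}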

Fixes: 809eac54cd ("iommu/dma: Implement scatterlist segment merging")
Reported-by: Nicolin Chen <nicoleotsuka@gmail.com>
Tested-by: Nicolin Chen <nicoleotsuka@gmail.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-09 17:31:39 +02:00
Suthikulpanit, Suravee
b9c6ff94e4 iommu/amd: Re-factor guest virtual APIC (de-)activation code
Re-factor the logic for activating/deactivating guest virtual APIC mode
(GAM) into helper functions, and export them for other drivers (e.g. SVM)
to support run-time activation/deactivation of SVM AVIC.

Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-09 17:30:40 +02:00
Lu Baolu
bfeaec7f7d iommu/vt-d: Correctly check format of page table in debugfs
The PASID support and enable bits in the context entry aren't the right
indicator for the type of tables (legacy or scalable mode). Check
the DMA_RTADDR_SMT bit in the root context pointer instead.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Sai Praneeth <sai.praneeth.prakhya@intel.com>
Fixes: dd5142ca5d ("iommu/vt-d: Add debugfs support to show scalable mode DMAR table internals")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-09 17:29:25 +02:00
Will Deacon
2af2e72b18 iommu/arm-smmu-v3: Defer TLB invalidation until ->iotlb_sync()
Update the iommu_iotlb_gather structure passed to ->tlb_add_page() and
use this information to defer all TLB invalidation until ->iotlb_sync().
This drastically reduces contention on the command queue, since we can
insert our commands in batches rather than one-by-one.
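
A sketch of the deferral, treating 'end' as exclusive for simplicity; the
helper names are illustrative, and only struct iommu_iotlb_gather comes
from the iommu core:

static void my_tlb_add_page(struct iommu_iotlb_gather *gather,
			    unsigned long iova, size_t granule)
{
	/* Just grow the pending range instead of invalidating right away. */
	gather->start = min(gather->start, iova);
	gather->end = max_t(unsigned long, gather->end, iova + granule);
	gather->pgsize = granule;
}

static void my_iotlb_sync(struct iommu_domain *domain,
			  struct iommu_iotlb_gather *gather)
{
	/*
	 * Issue one ranged invalidation for [gather->start, gather->end)
	 * followed by a single CMD_SYNC, instead of one sync per page.
	 */
}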

Tested-by: Ganapatrao Kulkarni  <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-08 13:31:54 +01:00
Will Deacon
587e6c10a7 iommu/arm-smmu-v3: Reduce contention during command-queue insertion
The SMMU command queue is a bottleneck in large systems, thanks to the
spin_lock which serialises accesses from all CPUs to the single queue
supported by the hardware.

Attempt to improve this situation by moving to a new algorithm for
inserting commands into the queue, which is lock-free on the fast-path.

Tested-by: Ganapatrao Kulkarni  <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-08 13:31:54 +01:00
Lu Baolu
458b7c8e0d iommu/vt-d: Detach domain when move device out of group
When removing a device from an iommu group, the domain should
be detached from the device. Otherwise, the stale domain info
will still be cached by the driver and the driver will refuse
to attach any domain to the device again.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Fixes: b7297783c2 ("iommu/vt-d: Remove duplicated code for device hotplug")
Reported-and-tested-by: Vlad Buslov <vladbu@mellanox.com>
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lkml.org/lkml/2019/7/26/1133
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-06 17:27:10 +02:00
Anders Roxell
11f4fe9ba3 iommu/arm-smmu: Mark expected switch fall-through
Now that -Wimplicit-fallthrough is passed to GCC by default, the
following warning shows up:

../drivers/iommu/arm-smmu-v3.c: In function ‘arm_smmu_write_strtab_ent’:
../drivers/iommu/arm-smmu-v3.c:1189:7: warning: this statement may fall
 through [-Wimplicit-fallthrough=]
    if (disable_bypass)
       ^
../drivers/iommu/arm-smmu-v3.c:1191:3: note: here
   default:
   ^~~~~~~

Rework the code so that the compiler doesn't warn about the fall-through.
Make it clearer by calling 'BUG_ON()' when disable_bypass is set, and
always 'break;'.
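
The shape of the fix, with placeholder identifiers rather than the actual
STE configuration values:

static void my_write_strtab_ent(int cfg, bool disable_bypass)
{
	switch (cfg) {
	case 0:				/* placeholder for the bypass case */
		BUG_ON(disable_bypass);	/* assert instead of conditionally falling through */
		break;			/* always break: no fall-through warning */
	default:
		BUG();			/* unexpected configuration */
	}
}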

Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-06 17:26:38 +02:00
Robin Murphy
8af23fad62 iommu/dma: Handle MSI mappings separately
MSI pages must always be mapped into a device's *current* domain, which
*might* be the default DMA domain, but might instead be a VFIO domain
with its own MSI cookie. This subtlety got accidentally lost in the
streamlining of __iommu_dma_map(), but rather than reintroduce more
complexity and/or special-casing, it turns out neater to just split this
path out entirely.

Since iommu_dma_get_msi_page() already duplicates much of what
__iommu_dma_map() does, it can easily just make the allocation and
mapping calls directly as well. That way we can further streamline the
helper back to exclusively operating on DMA domains.

Fixes: b61d271e59 ("iommu/dma: Move domain lookup into __iommu_dma_{map,unmap}")
Reported-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reported-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Andre Przywara <andre.przywara@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-06 17:22:49 +02:00
Suzuki K Poulose
67843bbaf3 drivers: Introduce device lookup variants by fwnode
Add a helper to match the firmware node handle of a device and provide
wrappers for {bus/class/driver}_find_device() APIs to avoid proliferation
of duplicate custom match functions.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Doug Ledford <dledford@redhat.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: linux-usb@vger.kernel.org
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Joe Perches <joe@perches.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Link: https://lore.kernel.org/r/20190723221838.12024-4-suzuki.poulose@arm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-07-30 13:07:42 +02:00
Linus Torvalds
2a11c76e53 virtio, vhost: bugfixes

Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost

Pull virtio/vhost fixes from Michael Tsirkin:

 - Fixes in the iommu and balloon devices.

 - Disable the meta-data optimization for now - I hope we can get it
   fixed shortly, but there's no point in making users suffer crashes
   while we are working on that.

* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
  vhost: disable metadata prefetch optimization
  iommu/virtio: Update to most recent specification
  balloon: fix up comments
  mm/balloon_compaction: avoid duplicate page removal
2019-07-29 11:34:12 -07:00
Will Deacon
7c288a5b27 iommu/arm-smmu-v3: Operate directly on low-level queue where possible
In preparation for rewriting the command queue insertion code to use a
new algorithm, rework many of our queue macro accessors and manipulation
functions so that they operate on the arm_smmu_ll_queue structure where
possible. This will allow us to call these helpers on local variables
without having to construct a full-blown arm_smmu_queue on the stack.

No functional change.

Tested-by: Ganapatrao Kulkarni  <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-29 17:30:21 +01:00
Will Deacon
52be86374f iommu/arm-smmu-v3: Move low-level queue fields out of arm_smmu_queue
In preparation for rewriting the command queue insertion code to use a
new algorithm, introduce a new arm_smmu_ll_queue structure which contains
only the information necessary to perform queue arithmetic for a queue
and will later be extended so that we can perform complex atomic
manipulation on some of the fields.

No functional change.
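
The queue arithmetic relies on each of 'prod' and 'cons' carrying an index
in the low bits plus a wrap bit immediately above it; a sketch (the field
and macro names only approximate the driver's):

struct my_ll_queue {
	u32	prod;		/* index in low bits, wrap flag one bit above */
	u32	cons;
	u32	max_n_shift;	/* log2 of the number of queue entries */
};

#define MY_Q_IDX(q, p)	((p) & ((1U << (q)->max_n_shift) - 1))
#define MY_Q_WRP(q, p)	((p) & (1U << (q)->max_n_shift))

static bool my_queue_empty(struct my_ll_queue *q)
{
	return MY_Q_IDX(q, q->prod) == MY_Q_IDX(q, q->cons) &&
	       MY_Q_WRP(q, q->prod) == MY_Q_WRP(q, q->cons);
}

static bool my_queue_full(struct my_ll_queue *q)
{
	return MY_Q_IDX(q, q->prod) == MY_Q_IDX(q, q->cons) &&
	       MY_Q_WRP(q, q->prod) != MY_Q_WRP(q, q->cons);
}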

Tested-by: Ganapatrao Kulkarni  <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-29 17:30:21 +01:00
Will Deacon
8a073da07b iommu/arm-smmu-v3: Drop unused 'q' argument from Q_OVF macro
The Q_OVF macro doesn't need to access the arm_smmu_queue structure, so
drop the unused macro argument.

No functional change.

Tested-by: Ganapatrao Kulkarni  <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-29 17:30:20 +01:00
Will Deacon
2a8868f16e iommu/arm-smmu-v3: Separate s/w and h/w views of prod and cons indexes
In preparation for rewriting the command queue insertion code to use a
new algorithm, separate the software and hardware views of the prod and
cons indexes so that manipulating the software state doesn't
automatically update the hardware state at the same time.

No functional change.

Tested-by: Ganapatrao Kulkarni  <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-29 17:30:20 +01:00
Will Deacon
3951c41af4 iommu/io-pgtable: Pass struct iommu_iotlb_gather to ->tlb_add_page()
With all the pieces in place, we can finally propagate the
iommu_iotlb_gather structure from the call to unmap() down to the IOMMU
drivers' implementation of ->tlb_add_page(). Currently everybody ignores
it, but the machinery is now there to defer invalidation.

Signed-off-by: Will Deacon <will@kernel.org>
2019-07-29 17:22:59 +01:00
Will Deacon
a2d3a382d6 iommu/io-pgtable: Pass struct iommu_iotlb_gather to ->unmap()
Update the io-pgtable ->unmap() function to take an iommu_iotlb_gather
pointer as an argument, and update the callers as appropriate.

Signed-off-by: Will Deacon <will@kernel.org>
2019-07-29 17:22:59 +01:00
Will Deacon
e953f7f2fa iommu/io-pgtable: Remove unused ->tlb_sync() callback
The ->tlb_sync() callback is no longer used, so it can be removed.

Signed-off-by: Will Deacon <will@kernel.org>
2019-07-29 17:22:58 +01:00