Commit Graph

23 Commits

Author SHA1 Message Date
Colin Ian King
f42e4b337b drm/nouveau/nouveau: fix incorrect sizeof on args.src and args.dst
The sizeof is currently taken on args.src and args.dst, but it should be
taken on *args.src and *args.dst. Fortunately these just so happen to be
the same size, so the code still worked; fix it anyway, which also cleans
up static analysis warnings.
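
A minimal sketch of the bug class (illustrative, not the exact nouveau code):

  /* wrong: sizeof the pointer itself */
  args.src = kcalloc(max, sizeof(args.src), GFP_KERNEL);
  /* intended: sizeof the element the pointer refers to */
  args.src = kcalloc(max, sizeof(*args.src), GFP_KERNEL);

Because args.src is a pointer to unsigned long, both expressions happen to
evaluate to 8 bytes on 64-bit builds, which is why the allocation size came
out right by accident.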

Addresses-Coverity: ("sizeof not portable")
Fixes: f268307ec7 ("nouveau: simplify nouveau_dmem_migrate_vma")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2020-01-15 10:49:58 +10:00
Christoph Hellwig
06d462beb4 mm: remove the unused MIGRATE_PFN_DEVICE flag
No one ever checks this flag, and we could easily get that information
from the page if needed.
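
Where the flag used to be consulted, the same information can be derived from
the page itself, e.g. (sketch; the consuming helper is hypothetical):

  struct page *page = migrate_pfn_to_page(mpfn);

  /* device-private (ZONE_DEVICE) pages identify themselves */
  if (page && is_device_private_page(page))
          handle_device_private_page(page);   /* hypothetical */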

Link: https://lore.kernel.org/r/20190814075928.23766-10-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-08-20 09:35:03 -03:00
Christoph Hellwig
f268307ec7 nouveau: simplify nouveau_dmem_migrate_vma
Factor the main copy page to vram routine out into a helper that acts
on a single page and which doesn't require the nouveau_dmem_migrate
structure for argument passing.  As an added benefit the new version
only allocates the dma address array once and reuses it for each
subsequent chunk of work.
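
The resulting structure is roughly the following (names simplified; a sketch,
not the verbatim driver code):

  /* DMA-address array allocated once, reused for every chunk of pages */
  dma_addr_t *dma_addrs = kmalloc_array(max, sizeof(*dma_addrs), GFP_KERNEL);

  for (i = 0; i < npages; i++) {
          /* helper acts on a single page: copy it to VRAM and record its
           * DMA mapping, no nouveau_dmem_migrate struct needed */
          nouveau_dmem_migrate_copy_one(drm, src[i], &dst[i], &dma_addrs[i]);
  }
  kfree(dma_addrs);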

Link: https://lore.kernel.org/r/20190814075928.23766-8-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-08-20 09:35:03 -03:00
Christoph Hellwig
bfe69ef94a nouveau: simplify nouveau_dmem_migrate_to_ram
Factor the main copy page to ram routine out into a helper that acts on
a single page and which doesn't require the nouveau_dmem_fault
structure for argument passing.  Also remove the loop over multiple
pages as we only handle one at the moment, although the structure of
the main worker function makes it relatively easy to add multi-page
support back if needed in the future.  But at least for now this avoids
the need to dynamically allocate memory for the dma addresses in
what is essentially the page fault path.
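
The practical upshot, sketched with hypothetical names: the fault path now
handles exactly one page, so the DMA address can live on the stack instead of
in a dynamically allocated array:

  dma_addr_t dma_addr;    /* one page: a stack variable is enough */

  /* hypothetical single-page helper replacing the old nouveau_dmem_fault
   * struct plumbing */
  if (nouveau_dmem_fault_copy_one(drm, vmf, &args, &dma_addr))
          return VM_FAULT_SIGBUS;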

Link: https://lore.kernel.org/r/20190814075928.23766-7-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-08-20 09:35:03 -03:00
Christoph Hellwig
2ab2bda53c nouveau: factor out dmem fence completion
Factor out the end of fencing logic from the two migration routines.
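
The shared helper is essentially a wait-and-release pattern, roughly (sketch;
the exact name and body are approximate):

  static void nouveau_dmem_fence_done(struct nouveau_fence **fence)
  {
          if (*fence) {
                  nouveau_fence_wait(*fence, true, false);
                  nouveau_fence_unref(fence);
          }
  }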

Link: https://lore.kernel.org/r/20190814075928.23766-5-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-08-20 09:35:03 -03:00
Christoph Hellwig
64de8b8d65 nouveau: factor out device memory address calculation
Factor out the repeated device memory address calculation into
a helper.
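
The helper boils down to an offset computation of this shape (sketch; helper
and field names are approximate):

  static u64 nouveau_dmem_page_addr(struct page *page)
  {
          struct nouveau_dmem_chunk *chunk = page->zone_device_data;
          unsigned long idx = page_to_pfn(page) - chunk->pfn_first;

          /* VRAM address = base of the backing chunk + offset within it */
          return chunk->bo->bo.offset + (idx << PAGE_SHIFT);
  }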

Link: https://lore.kernel.org/r/20190814075928.23766-4-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-08-20 09:35:03 -03:00
Christoph Hellwig
dea027f282 nouveau: reset dma_nr in nouveau_dmem_migrate_alloc_and_copy
When we start a new batch of dma_map operations we need to reset dma_nr,
as we start filling a newly allocated array.
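
In code terms the intent is simply (sketch; the array field name is assumed):

  migrate->dma = kmalloc_array(npages, sizeof(*migrate->dma), GFP_KERNEL);
  migrate->dma_nr = 0;    /* fresh array: restart the fill counter */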

Link: https://lore.kernel.org/r/20190814075928.23766-3-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-08-20 09:35:03 -03:00
Christoph Hellwig
a7d1f22bb7 mm: turn migrate_vma upside down
There isn't any good reason to pass callbacks to migrate_vma.  Instead
we can just export the three steps done by this function to drivers and
let them sequence the operation without callbacks.  This removes a lot
of boilerplate code as-is, and will allow the drivers to drastically
improve code flow and error handling further on.
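
The driver-side sequence after this change looks roughly like this (sketch;
only the core struct migrate_vma fields are shown):

  struct migrate_vma args = {
          .vma   = vma,
          .start = start,
          .end   = end,
          .src   = src_pfns,
          .dst   = dst_pfns,
  };

  if (migrate_vma_setup(&args))           /* collect and isolate source pages */
          return -EINVAL;

  if (args.cpages) {
          /* the driver performs its own copy of the collected pages here */
          migrate_vma_pages(&args);       /* install the destination pages */
  }
  migrate_vma_finalize(&args);            /* restore CPU mappings, drop refs */

This is what lets drivers such as nouveau drop their migrate_vma_ops callbacks
and drive the copy from straight-line code.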

Link: https://lore.kernel.org/r/20190814075928.23766-2-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-08-20 09:35:02 -03:00
Christoph Hellwig
f32471e2cf mm/hmm: remove the legacy hmm_pfn_* APIs
Switch the one remaining user in nouveau over to its replacement, and
remove all the wrappers.

Link: https://lore.kernel.org/r/20190724065258.16603-7-hch@lst.de
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-07-25 16:14:40 -03:00
Linus Torvalds
31cc088a4f drm fixes for -rc1:
nouveau:
 - bugfixes + TU116 enabling (minor iteration)
 
 amdgpu:
 - large pile of fixes for new hw support this release (navi, vega20)
 - audio hotplug fix
 - bunch of corner cases and small fixes all over for amdgpu/kfd
 
 komeda:
 - back out some new properties (from this merge window) that need
   more pondering.
 
 bochs: fb pitch setup
 
 ... plus a new panel quirk

Merge tag 'drm-next-2019-07-19' of git://anongit.freedesktop.org/drm/drm

Pull drm fixes from Daniel Vetter:
 "Dave is back in shape, but now family got it so I'm doing the pull.
  Two things worthy of note:

   - nouveau feature pull was way too late; Dave and I decided not to take
     that, so Ben spun up a pull with just the fixes.

   - after some chatting with the arm display maintainers we decided to
     change a bit how that's maintained, for more oversight/review and
     cross vendor collab.

  More details below:

  nouveau:
   - bugfixes
   - TU116 enabling (minor iteration)

  amdgpu:
   - large pile of fixes for new hw support this release (navi, vega20)
   - audio hotplug fix
   - bunch of corner cases and small fixes all over for amdgpu/kfd

  komeda:
   - back out some new properties (from this merge window) that need
     more pondering.

  bochs:
   - fb pitch setup

  core:
   - a new panel quirk
   - misc fixes"

* tag 'drm-next-2019-07-19' of git://anongit.freedesktop.org/drm/drm: (73 commits)
  drm/nouveau/secboot/gp102-: remove WAR for SEC2 RTOS start bug
  drm/nouveau/flcn/gp102-: improve implementation of bind_context() on SEC2/GSP
  drm/nouveau: fix memory leak in nouveau_conn_reset()
  drm/nouveau/dmem: missing mutex_lock in error path
  drm/nouveau/hwmon: return EINVAL if the GPU is powered down for sensors reads
  drm/nouveau: fix bogus GPL-2 license header
  drm/nouveau: fix bogus GPL-2 license header
  drm/nouveau/i2c: Enable i2c pads & busses during preinit
  drm/nouveau/disp/tu102-: wire up scdc parameter setter
  drm/nouveau/core: recognise TU116 chipset
  drm/nouveau/kms: disallow dual-link harder if hdmi connection detected
  drm/nouveau/disp/nv50-: fix center/aspect-corrected scaling
  drm/nouveau/disp/nv50-: force scaler for any non-default LVDS/eDP modes
  drm/nouveau/mcp89/mmu: Use mcp77_mmu_new instead of g84_mmu_new on MCP89.
  drm/amd/display: init res_pool dccg_ref, dchub_ref with xtalin_freq
  drm/amdgpu/pm: remove check for pp funcs in freq sysfs handlers
  drm/amd/display: Force uclk to max for every state
  drm/amdkfd: Remove GWS from process during uninit
  drm/amd/amdgpu: Fix offset for vmid selection in debugfs interface
  drm/amd/powerplay: update vega20 driver if to fit latest SMU firmware
  ...
2019-07-19 12:29:43 -07:00
Ralph Campbell
d304654bd7 drm/nouveau/dmem: missing mutex_lock in error path
In nouveau_dmem_pages_alloc(), the drm->dmem->mutex is unlocked before
calling nouveau_dmem_chunk_alloc() but not re-acquired afterwards, which
triggers the following splat when CONFIG_PROVE_LOCKING is enabled:

[ 1294.871933] =====================================
[ 1294.876656] WARNING: bad unlock balance detected!
[ 1294.881375] 5.2.0-rc3+ #5 Not tainted
[ 1294.885048] -------------------------------------
[ 1294.889773] test-malloc-vra/6299 is trying to release lock (&drm->dmem->mutex) at:
[ 1294.897482] [<ffffffffa01a220f>] nouveau_dmem_migrate_alloc_and_copy+0x79f/0xbf0 [nouveau]
[ 1294.905782] but there are no more locks to release!
[ 1294.910690]
[ 1294.910690] other info that might help us debug this:
[ 1294.917249] 1 lock held by test-malloc-vra/6299:
[ 1294.921881]  #0: 0000000016e10454 (&mm->mmap_sem#2){++++}, at: nouveau_svmm_bind+0x142/0x210 [nouveau]
[ 1294.931313]
[ 1294.931313] stack backtrace:
[ 1294.935702] CPU: 4 PID: 6299 Comm: test-malloc-vra Not tainted 5.2.0-rc3+ #5
[ 1294.942786] Hardware name: ASUS X299-A/PRIME X299-A, BIOS 1401 05/21/2018
[ 1294.949590] Call Trace:
[ 1294.952059]  dump_stack+0x7c/0xc0
[ 1294.955469]  ? nouveau_dmem_migrate_alloc_and_copy+0x79f/0xbf0 [nouveau]
[ 1294.962213]  print_unlock_imbalance_bug.cold.52+0xca/0xcf
[ 1294.967641]  lock_release+0x306/0x380
[ 1294.971383]  ? nouveau_dmem_migrate_alloc_and_copy+0x79f/0xbf0 [nouveau]
[ 1294.978089]  ? lock_downgrade+0x2d0/0x2d0
[ 1294.982121]  ? find_held_lock+0xac/0xd0
[ 1294.985979]  __mutex_unlock_slowpath+0x8f/0x3f0
[ 1294.990540]  ? wait_for_completion+0x230/0x230
[ 1294.995002]  ? rwlock_bug.part.2+0x60/0x60
[ 1294.999197]  nouveau_dmem_migrate_alloc_and_copy+0x79f/0xbf0 [nouveau]
[ 1295.005751]  ? page_mapping+0x98/0x110
[ 1295.009511]  migrate_vma+0xa74/0x1090
[ 1295.013186]  ? move_to_new_page+0x480/0x480
[ 1295.017400]  ? __kmalloc+0x153/0x300
[ 1295.021052]  ? nouveau_dmem_migrate_vma+0xd8/0x1e0 [nouveau]
[ 1295.026796]  nouveau_dmem_migrate_vma+0x157/0x1e0 [nouveau]
[ 1295.032466]  ? nouveau_dmem_init+0x490/0x490 [nouveau]
[ 1295.037612]  ? vmacache_find+0xc2/0x110
[ 1295.041537]  nouveau_svmm_bind+0x1b4/0x210 [nouveau]
[ 1295.046583]  ? nouveau_svm_fault+0x13e0/0x13e0 [nouveau]
[ 1295.051912]  drm_ioctl_kernel+0x14d/0x1a0
[ 1295.055930]  ? drm_setversion+0x330/0x330
[ 1295.059971]  drm_ioctl+0x308/0x530
[ 1295.063384]  ? drm_version+0x150/0x150
[ 1295.067153]  ? find_held_lock+0xac/0xd0
[ 1295.070996]  ? __pm_runtime_resume+0x3f/0xa0
[ 1295.075285]  ? mark_held_locks+0x29/0xa0
[ 1295.079230]  ? _raw_spin_unlock_irqrestore+0x3c/0x50
[ 1295.084232]  ? lockdep_hardirqs_on+0x17d/0x250
[ 1295.088768]  nouveau_drm_ioctl+0x9a/0x100 [nouveau]
[ 1295.093661]  do_vfs_ioctl+0x137/0x9a0
[ 1295.097341]  ? ioctl_preallocate+0x140/0x140
[ 1295.101623]  ? match_held_lock+0x1b/0x230
[ 1295.105646]  ? match_held_lock+0x1b/0x230
[ 1295.109660]  ? find_held_lock+0xac/0xd0
[ 1295.113512]  ? __do_page_fault+0x324/0x630
[ 1295.117617]  ? lock_downgrade+0x2d0/0x2d0
[ 1295.121648]  ? mark_held_locks+0x79/0xa0
[ 1295.125583]  ? handle_mm_fault+0x352/0x430
[ 1295.129687]  ksys_ioctl+0x60/0x90
[ 1295.133020]  ? mark_held_locks+0x29/0xa0
[ 1295.136964]  __x64_sys_ioctl+0x3d/0x50
[ 1295.140726]  do_syscall_64+0x68/0x250
[ 1295.144400]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 1295.149465] RIP: 0033:0x7f1a3495809b
[ 1295.153053] Code: 0f 1e fa 48 8b 05 ed bd 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d bd bd 0c 00 f7 d8 64 89 01 48
[ 1295.171850] RSP: 002b:00007ffef7ed1358 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 1295.179451] RAX: ffffffffffffffda RBX: 00007ffef7ed1628 RCX: 00007f1a3495809b
[ 1295.186601] RDX: 00007ffef7ed13b0 RSI: 0000000040406449 RDI: 0000000000000004
[ 1295.193759] RBP: 00007ffef7ed13b0 R08: 0000000000000000 R09: 000000000157e770
[ 1295.200917] R10: 000000000151c010 R11: 0000000000000246 R12: 0000000040406449
[ 1295.208083] R13: 0000000000000004 R14: 0000000000000000 R15: 0000000000000000

Reacquire the lock before continuing to the next page.
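
The shape of the fix (sketch, error handling abbreviated):

  mutex_unlock(&drm->dmem->mutex);        /* chunk allocation can sleep */
  ret = nouveau_dmem_chunk_alloc(drm);
  if (ret)
          return ret;
  /* the missing piece: retake the lock before the next iteration touches
   * the free-page list again */
  mutex_lock(&drm->dmem->mutex);
  continue;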

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2019-07-19 16:26:51 +10:00
Christoph Hellwig
8a164fef9c mm: simplify ZONE_DEVICE page private data
Remove the clumsy hmm_devmem_page_{get,set}_drvdata helpers, and
instead just access the page directly.  Also make the page data
a void pointer, and thus much easier to use.
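
After this, driver data on a ZONE_DEVICE page is plain pointer access, e.g.
(sketch):

  /* store: associate the device page with its backing chunk */
  page->zone_device_data = chunk;

  /* load: recover the chunk from a faulting device-private page */
  struct nouveau_dmem_chunk *chunk = page->zone_device_data;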

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-07-02 14:32:45 -03:00
Christoph Hellwig
4239f267e3 nouveau: use devm_memremap_pages directly
Just use devm_memremap_pages instead of hmm_devmem_add to allow killing
that wrapper, which doesn't provide a whole lot of benefits.
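
The direct call has this shape (sketch; dev_pagemap setup details differ
between kernel versions, and the embedding is assumed):

  struct dev_pagemap *pgmap = &dmem->pagemap;     /* hypothetical embedding */
  void *addr;

  pgmap->type = MEMORY_DEVICE_PRIVATE;
  /* the resource describing the device-private range is filled in beforehand */
  addr = devm_memremap_pages(dev, pgmap);
  if (IS_ERR(addr))
          return PTR_ERR(addr);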

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-07-02 14:32:44 -03:00
Christoph Hellwig
721be86814 nouveau: use alloc_page_vma directly
hmm_vma_alloc_locked_page is scheduled to go away, use the proper
mm function directly.
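
In practice that amounts to (sketch):

  /* allocate the destination system-memory page for this VMA/address,
   * replacing the hmm_vma_alloc_locked_page() wrapper */
  dpage = alloc_page_vma(GFP_HIGHUSER, vmf->vma, vmf->address);
  if (!dpage)
          return VM_FAULT_OOM;
  lock_page(dpage);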

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-07-02 14:32:44 -03:00
Dave Airlie
cd84579112 Merge branch 'linux-5.1' of git://github.com/skeggsb/linux into drm-fixes
Some minor nouveau dmem and other fixes.

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Ben Skeggs <bskeggs@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/CABDvA==kMkD6n-cS9KpQBcTU1E8p7Wc+H1ZuOhSfD7yTFJVvkw@mail.gmail.com
2019-03-22 10:39:35 +10:00
Jérôme Glisse
8385741807 drm/nouveau/dmem: empty chunks do not have a buffer object associated with them
Empty chunks do not have a bo associated with them, so there is no need
to pin/unpin them on suspend/resume.

This fixes suspend/resume on 5.1-rc1 when NOUVEAU_SVM is enabled.
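
The fix amounts to skipping such chunks in the suspend/resume walks, roughly
(sketch; list and field names approximate, pin flags as in kernels of that era):

  list_for_each_entry(chunk, &drm->dmem->chunk_list, list) {
          if (!chunk->bo)         /* empty chunk: nothing to pin/unpin */
                  continue;
          ret = nouveau_bo_pin(chunk->bo, TTM_PL_FLAG_VRAM, false);
          if (ret)
                  return ret;
  }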

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Tobias Klausmann <tobias.johannes.klausmann@mni.thm.de>
Tested-by: Tobias Klausmann <tobias.johannes.klausmann@mni.thm.de>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: dri-devel@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2019-03-22 09:58:38 +10:00
Dan Carpenter
18ec3c129b drm/nouveau/dmem: Fix a NULL vs IS_ERR() check
The hmm_devmem_add() function doesn't return NULL, it returns error
pointers.
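
So the caller needs an IS_ERR() check rather than a NULL check (sketch, names
illustrative):

  devmem = hmm_devmem_add(&nouveau_dmem_devmem_ops, device, size);
  if (IS_ERR(devmem))
          return PTR_ERR(devmem);   /* hmm_devmem_add() never returns NULL */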

Fixes: 5be73b6908 ("drm/nouveau/dmem: device memory helpers for SVM")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2019-03-22 09:57:58 +10:00
YueHaibing
2219c9ee92 drm/nouveau/dmem: remove set but not used variable 'drm'
Fixes gcc '-Wunused-but-set-variable' warning:

drivers/gpu/drm/nouveau/nouveau_dmem.c: In function 'nouveau_dmem_free':
drivers/gpu/drm/nouveau/nouveau_dmem.c:103:22: warning:
 variable 'drm' set but not used [-Wunused-but-set-variable]
  struct nouveau_drm *drm;
                      ^

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2019-03-22 09:57:58 +10:00
Souptick Joarder
b57e622e6d mm/hmm: convert to use vm_fault_t
Convert to use vm_fault_t type as return type for fault handler.

kbuild reported a warning during testing of
*mm-create-the-new-vm_fault_t-type.patch*, available at the link below:
https://patchwork.kernel.org/patch/10752741/

  kernel/memremap.c:46:34: warning: incorrect type in return expression
                           (different base types)
  kernel/memremap.c:46:34: expected restricted vm_fault_t
  kernel/memremap.c:46:34: got int

This patch fixes the warnings, and hmm_devmem_fault() is also
converted to return vm_fault_t to avoid further warnings.
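
A handler of this general shape illustrates the conversion (sketch; arguments
and helper are hypothetical):

  static vm_fault_t example_devmem_fault(struct vm_fault *vmf)
  {
          /* previously this returned int; vm_fault_t documents that only
           * VM_FAULT_* codes come back */
          if (example_migrate_back_to_ram(vmf))
                  return VM_FAULT_SIGBUS;
          return 0;
  }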

[sfr@canb.auug.org.au: drm/nouveau/dmem: update for struct hmm_devmem_ops member change]
  Link: http://lkml.kernel.org/r/20190220174407.753d94e5@canb.auug.org.au
Link: http://lkml.kernel.org/r/20190110145900.GA1317@jordon-HP-15-Notebook-PC
Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-03-12 10:04:00 -07:00
Ben Skeggs
a788ade4f6 drm/nouveau/dmem: use dma addresses during migration copies
Removes the need for temporary VMM mappings.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2019-02-20 09:00:03 +10:00
Ben Skeggs
fd5e985643 drm/nouveau/dmem: use physical vram addresses during migration copies
Removes the need for temporary VMM mappings.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2019-02-20 09:00:03 +10:00
Ben Skeggs
6c762d1b18 drm/nouveau/dmem: extend copy function to allow direct use of physical addresses
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2019-02-20 09:00:03 +10:00
Jérôme Glisse
5be73b6908 drm/nouveau/dmem: device memory helpers for SVM
Device memory can be used with SVM, in which case we do not have any
of the existing buffer objects. This commit adds infrastructure to
allow use of device memory without a nouveau_bo. This is a temporary
solution until a rework of GPU memory management.

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
2019-02-20 09:00:02 +10:00