Remove the function nouveau_bo_rd16() that is not used anywhere.
This was partially found by using a static code analysis program
called cppcheck.
Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
We have the ability to move buffers around in the kernel if necessary,
and should probably use it rather than failing if userspace passes us
a non-contig buffer for a plane.
The NOUVEAU_GEM_TILE_NONCONTIG flag from userspace will become a mere
initial placement hint once all the relevant paths have been updated.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
On architectures for which access to GPU memory is non-coherent,
caches need to be flushed and invalidated explicitly when BO control
changes between CPU and GPU.
This patch adds buffer synchronization functions which invoke the
correct API (PCI or DMA) to ensure synchronization is effective.
Based on the TTM DMA cache helper patches by Lucas Stach.
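As a rough sketch only (not the literal patch; the device pointer and the
backing-store cast below are assumptions for illustration), the two
directions of synchronization on a DMA-API-backed ttm_tt could look like:

  /* needs <linux/dma-mapping.h> and the TTM headers */
  static void
  nouveau_bo_sync_for_device(struct nouveau_bo *nvbo, struct device *dev)
  {
          struct ttm_dma_tt *ttm_dma = (struct ttm_dma_tt *)nvbo->bo.ttm;
          unsigned long i;

          if (!ttm_dma)
                  return;

          /* write back dirty CPU cache lines so the GPU sees current data */
          for (i = 0; i < ttm_dma->ttm.num_pages; i++)
                  dma_sync_single_for_device(dev, ttm_dma->dma_address[i],
                                             PAGE_SIZE, DMA_TO_DEVICE);
  }

  static void
  nouveau_bo_sync_for_cpu(struct nouveau_bo *nvbo, struct device *dev)
  {
          struct ttm_dma_tt *ttm_dma = (struct ttm_dma_tt *)nvbo->bo.ttm;
          unsigned long i;

          if (!ttm_dma)
                  return;

          /* invalidate stale CPU cache lines before reading GPU writes */
          for (i = 0; i < ttm_dma->ttm.num_pages; i++)
                  dma_sync_single_for_cpu(dev, ttm_dma->dma_address[i],
                                          PAGE_SIZE, DMA_FROM_DEVICE);
  }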
Signed-off-by: Lucas Stach <dev@lynxeye.de>
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Allow nouveau_bo_new() to recognize the TTM_PL_FLAG_UNCACHED flag, which
means that we want the allocated BO to be perfectly coherent between the
CPU and GPU. This is useful on non-coherent architectures for which we
do not want to manually sync some rarely-accessed buffers: typically,
fences and pushbuffers.
A TTM BO allocated with the TTM_PL_FLAG_UNCACHED flag on a non-coherent
architecture will be populated using the DMA API, and accesses to it will
be performed through the coherent mapping returned by dma_alloc_coherent().
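A hedged usage sketch (the nouveau_bo_new() argument list shown here is
approximate and differs between kernel versions): allocating a small,
fully coherent buffer could look like:

  struct nouveau_bo *nvbo = NULL;
  int ret;

  /* TTM_PL_FLAG_UNCACHED requests full CPU/GPU coherency */
  ret = nouveau_bo_new(dev, PAGE_SIZE, 0,
                       TTM_PL_FLAG_TT | TTM_PL_FLAG_UNCACHED,
                       0, 0, NULL, NULL, &nvbo);
  if (ret == 0)
          ret = nouveau_bo_map(nvbo); /* no manual cache sync needed */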
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
v2: Don't forget git add, noticed by David.
Cc: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Acked-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
nouveau keeps track in userspace of whether a buffer is being written to
or read, but it doesn't use that information.
Change this to allow multiple readers on the same bo.
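A minimal sketch of the idea, assuming the post-fence-rework reservation
API (not the literal patch): writers attach an exclusive fence, readers a
shared one, so readers no longer serialize against each other.

  static void
  nouveau_bo_fence(struct nouveau_bo *nvbo, struct nouveau_fence *fence,
                   bool exclusive)
  {
          struct reservation_object *resv = nvbo->bo.resv;

          if (exclusive)
                  reservation_object_add_excl_fence(resv, &fence->base);
          else
                  /* a shared slot must have been reserved beforehand */
                  reservation_object_add_shared_fence(resv, &fence->base);
  }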
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Ben Skeggs <bskeggs@redhat.com>
This allows us to specify in a more fine-grained way where to place the
buffer object.
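For illustration only (the values and names below are made up, not taken
from the patch), each placement entry can now carry its own page-frame
range instead of a bare flags word:

  static const struct ttm_place vram_low = {
          .fpfn  = 0,                                 /* first allowed page */
          .lpfn  = (256 * 1024 * 1024) >> PAGE_SHIFT, /* cap to low 256 MiB */
          .flags = TTM_PL_FLAG_VRAM | TTM_PL_FLAG_WC,
  };

  static const struct ttm_placement vram_low_placement = {
          .num_placement      = 1,
          .placement          = &vram_low,
          .num_busy_placement = 1,
          .busy_placement     = &vram_low,
  };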
v2: rebased on drm-next, add bochs changes as well
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
There is no reason to keep the gem object separately allocated. nouveau is
the last user of gem_obj->driver_private, so if we embed it, we can get
rid of 8 bytes per gem-object.
The implementation follows the radeon driver. bo->gem is only valid iff
the bo was created via the gem helpers _and_ the user holds a valid gem
reference; that is, the gem object holds a reference to the nouveau_bo.
If you use nouveau_bo_ref() to gain a bo reference, you are not
guaranteed to also hold a gem reference. The gem object might get
destroyed after the last user drops the gem-ref via
drm_gem_object_unreference(). Use drm_gem_object_reference() to gain a
gem-reference.
For debugging, we can use bo->gem.filp != NULL to test whether a gem-bo is
valid. However, this shouldn't be used for real functionality to avoid
gem-internal dependencies.
Note that the implementation follows the previous style. However, we no
longer can check for bo->gem != NULL to test for a valid gem object. This
wasn't done before, so we should be safe now.
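A condensed sketch of the embedding and the lookup helper (driver state
trimmed for brevity):

  struct nouveau_bo {
          struct ttm_buffer_object bo;
          /* ... driver state elided ... */
          struct drm_gem_object gem;  /* embedded instead of pointed to */
  };

  static inline struct nouveau_bo *
  nouveau_gem_object(struct drm_gem_object *gem)
  {
          /* only meaningful while a gem reference is held, see above */
          return gem ? container_of(gem, struct nouveau_bo, gem) : NULL;
  }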
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Simplify the Nouveau prime implementation by using the default behavior provided
by drm_gem_prime_import and drm_gem_prime_export.
v2: Rename functions to nouveau_gem_prime_get_sg_table and
nouveau_gem_prime_import_sg_table.
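Roughly, the driver then only supplies the sg_table hooks and points
everything else at the generic helpers; the wiring below is illustrative
and the member names follow the drm_driver of that period:

  static struct drm_driver driver = {
          .driver_features = DRIVER_GEM | DRIVER_PRIME /* | ... */,

          .prime_handle_to_fd        = drm_gem_prime_handle_to_fd,
          .prime_fd_to_handle        = drm_gem_prime_fd_to_handle,
          .gem_prime_export          = drm_gem_prime_export,
          .gem_prime_import          = drm_gem_prime_import,
          .gem_prime_get_sg_table    = nouveau_gem_prime_get_sg_table,
          .gem_prime_import_sg_table = nouveau_gem_prime_import_sg_table,
          /* ... remaining ops ... */
  };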
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Airlie <airlied@linux.ie>
Signed-off-by: Dave Airlie <airlied@redhat.com>
... by moving the bo_pin/bo_unpin manipulation of the pin_refcount
under the protection of the ttm reservation lock. pin/unpin seems
to get called from all over the place, so atm this is completely racy.
After this patch there are only a few places in cleanup functions
left which access ->pin_refcount without locking. But I'm hoping that
those are safe and some other code invariant guarantees that this
won't blow up.
In any case, I only need to fix up pin/unpin to make ->pageflip work
safely, so let's keep it at that.
Add a comment to the header to explain the new locking rule.
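A minimal sketch of the resulting rule, with error paths and the actual
placement/validate work elided (the ttm_bo_reserve() prototype varies
across kernel versions):

  int
  nouveau_bo_pin(struct nouveau_bo *nvbo, uint32_t memtype)
  {
          struct ttm_buffer_object *bo = &nvbo->bo;
          int ret;

          ret = ttm_bo_reserve(bo, false, false, false, 0);
          if (ret)
                  return ret;

          /* pin_refcount is only ever touched with the reservation held */
          if (nvbo->pin_refcount++ == 0) {
                  /* first pin: restrict placement and validate, e.g.
                   * nouveau_bo_placement_set() + ttm_bo_validate() */
          }

          ttm_bo_unreserve(bo);
          return 0;
  }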
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
All items on the lru list are always reservable, so this is a stupid
thing to keep. Not only that, it is used in a way which would
guarantee deadlocks if it were ever to be set to block on reserve.
This is a lot of churn, but mostly because of the removal of the
argument which can be nested arbitrarily deeply in many places.
No change of code in this patch except removal of the no_wait_reserve
argument; the previous patch removed the use of no_wait_reserve.
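For illustration, the shape of the change at a typical call site
(prototypes quoted from memory and may differ in detail):

  /* before: ttm_bo_validate(bo, placement, intr, no_wait_reserve, no_wait_gpu); */
  ret = ttm_bo_validate(bo, placement, intr, no_wait_gpu);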
v2:
- Warn if -EBUSY is returned on reservation; all objects on the list
should be reservable. Adjusted patch slightly due to conflicts.
v3:
- Focus on no_wait_reserve removal only.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Kepler PFIFO lost the ability to address multiple engines from a single
channel, so we need a separate one for the copy engine.
v2: Marcin Slusarz <marcin.slusarz@gmail.com>
- regression fix: restore hw accelerated buffer copies
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This is a HUGE commit, but it's not nearly as bad as it looks - any problems
can be isolated to a particular chipset and engine combination. It was
simply too difficult to port each one at a time; the compat layers are
*already* ridiculous.
Most of the changes here are simply to the glue; the process for each of the
engine modules was to start with a standard skeleton and copy+paste the old
code into the appropriate places, fixing up variable names etc as needed.
v2: Marcin Slusarz <marcin.slusarz@gmail.com>
- fix find/replace bug in license header
v3: Ben Skeggs <bskeggs@redhat.com>
- bump indirect pushbuf size to 8KiB; 4KiB was barely enough for userspace
and left no space for the kernel's requirements during GEM pushbuf submission.
- fix duplicate assignments noticed by clang
v4: Marcin Slusarz <marcin.slusarz@gmail.com>
- add sparse annotations to nv04_fifo_pause/nv04_fifo_start
- use ioread32_native/iowrite32_native for fifo control registers
v5: Ben Skeggs <bskeggs@redhat.com>
- rebase on v3.6-rc4, modified to keep copy engine fix intact
- nv10/fence: unmap fence bo before destroying
- fixed fermi regression when using nvidia gr fuc
- fixed typo in supported dma_mask checking
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>