The bug is here:
return encoder;
The list iterator value 'encoder' will *always* be set and non-NULL
by drm_for_each_encoder_mask(), so it is incorrect to assume that the
iterator value will be NULL if the list is empty or no element is
found. That assumption bypasses the subsequent NULL checks and can
lead to invalid memory accesses.
To fix this bug, return 'encoder' only when an element is found;
otherwise return NULL.
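A sketch of the fixed pattern (everything except
drm_for_each_encoder_mask() is illustrative here, not the actual
nouveau code):

struct drm_encoder *encoder;

drm_for_each_encoder_mask(encoder, dev, encoder_mask) {
	if (encoder_matches(encoder))	/* hypothetical predicate */
		return encoder;		/* found an element */
}

/* Loop fell through: nothing matched, so the caller's NULL
 * checks now actually mean something. */
return NULL;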
Cc: stable@vger.kernel.org
Fixes: 12885ecbfe ("drm/nouveau/kms/nvd9-: Add CRC support")
Signed-off-by: Xiaomeng Tong <xiam0nd.tong@gmail.com>
Reviewed-by: Lyude Paul <lyude@redhat.com>
[Changed commit title]
Signed-off-by: Lyude Paul <lyude@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220327073925.11121-1-xiam0nd.tong@gmail.com
This way we finally fix the problem that new resources are
not immediately evictable after allocation.
That has caused numerous problems, including OOM on GDS handling
and not being able to use TTM as a general resource manager.
v2: stop assuming in ttm_resource_fini that res->bo is still valid.
v3: cleanup kerneldoc, add more lockdep annotation
v4: consistently use res->num_pages
Signed-off-by: Christian König <christian.koenig@amd.com>
Tested-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20220321132601.2161-1-christian.koenig@amd.com
On devices with DMM, all allocations are done through either DMM or TILER.
DMM/TILER being a limited resource means that such allocations will start
to fail before actual free memory is exhausted. What is even worse is that
with time DMM/TILER space gets fragmented to the point that even if we have
enough free DMM/TILER space and free memory, allocation fails because there
is no big enough free block in DMM/TILER space.
Such failures can easily be observed with the OMAP xorg DDX, for
example - starting a few GUI applications (so buffers for their
windows are allocated) and then rotating landscape<->portrait while
closing and opening new windows soon results in allocation failures.
Fix that by mapping buffers through DMM/TILER only when really
needed, e.g. for scanout buffers.
Signed-off-by: Ivaylo Dimitrov <ivo.g.dimitrov.75@gmail.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1642587791-13222-4-git-send-email-ivo.g.dimitrov.75@gmail.com
Currently the code allocates non-scanout BOs from SHMEM and those
objects are accessible to userspace via mmap(). However, on devices
with no DMM (like OMAP3), the same objects are not accessible by
kernel drivers that want to render to them, as the code refuses to
export them. In turn this means that on devices with no DMM, all
buffers must be allocated as scanout, otherwise only the CPU can
access them. On those devices, scanout buffers are allocated from
CMA, making those allocations highly unreliable.
Fix that by implementing functionality to export SHMEM-backed
buffers on devices with no DMM. This way CMA memory is used only
when needed, instead of for every buffer that has to be rendered
off-CPU.
Tested on Motorola Droid4 and Nokia N900
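A sketch of the idea, assuming the exporter can hand out the shmem
pages directly; to_omap_bo() and the pages field follow the omapdrm
naming, drm_prime_pages_to_sg() is the generic helper:

static struct sg_table *sketch_get_sg_table(struct drm_gem_object *obj)
{
	struct omap_gem_object *omap_obj = to_omap_bo(obj);

	/* No DMM: export the shmem pages as-is instead of a
	 * TILER-mapped contiguous view. */
	return drm_prime_pages_to_sg(obj->dev, omap_obj->pages,
				     obj->size >> PAGE_SHIFT);
}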
Signed-off-by: Ivaylo Dimitrov <ivo.g.dimitrov.75@gmail.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1642587791-13222-3-git-send-email-ivo.g.dimitrov.75@gmail.com
Currently we take the max_bpc property as the bpc value and do not
try anything else.
However, what the other drivers do is try the highest bpc allowed by
the max_bpc property and the hardware capabilities first, test
whether it results in an acceptable configuration, and if not,
decrease the bpc and try again.
Let's use the same logic.
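A sketch of that loop; only max_bpc comes from drm_connector_state,
the helpers are illustrative:

static int sketch_compute_config(struct drm_connector_state *state,
				 const struct drm_display_mode *mode)
{
	unsigned int bpc;

	/* Start from the highest bpc the property and the hardware
	 * allow, and walk down until a configuration fits. */
	for (bpc = min_t(unsigned int, state->max_bpc,
			 sketch_hw_max_bpc()); bpc >= 8; bpc -= 2) {
		if (sketch_config_valid(mode, bpc))
			return sketch_apply_config(state, mode, bpc);
	}

	return -EINVAL;
}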
Signed-off-by: Maxime Ripard <maxime@cerno.tech>
Acked-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220222164042.403112-7-maxime@cerno.tech
If DRM_ITE_IT6505 is y but DRM_DP_HELPER is m, the build fails:
drivers/gpu/drm/bridge/ite-it6505.o: In function `it6505_i2c_remove':
ite-it6505.c:(.text+0x35c): undefined reference to `drm_dp_aux_unregister'
drivers/gpu/drm/bridge/ite-it6505.o: In function `it6505_dpcd_read':
ite-it6505.c:(.text+0x420): undefined reference to `drm_dp_dpcd_read'
drivers/gpu/drm/bridge/ite-it6505.o: In function `it6505_get_dpcd':
ite-it6505.c:(.text+0x4a4): undefined reference to `drm_dp_dpcd_read'
drivers/gpu/drm/bridge/ite-it6505.o: In function `it6505_dpcd_write':
ite-it6505.c:(.text+0x52c): undefined reference to `drm_dp_dpcd_write'
Select DRM_DP_HELPER for DRM_ITE_IT6505 to fix this.
Fixes: b5c84a9edc ("drm/bridge: add it6505 driver")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Robert Foss <robert.foss@linaro.org>
Signed-off-by: Robert Foss <robert.foss@linaro.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20220317094724.25972-1-yuehaibing@huawei.com
This function allows replacing fences in the shared fence list when
we can guarantee that the operation represented by the original
fence has finished, or that the resources protected by the dma_resv
object are no longer accessed once the new fence finishes.
Then use this function in the amdkfd code when BOs are unmapped from
the process.
v2: add an example of when this is useful.
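A hedged usage sketch, assuming the three-argument form this patch
adds; 'bo' and 'ef' follow the amdkfd eviction fence usage described
above but are illustrative here:

static void sketch_remove_eviction_fence(struct amdgpu_bo *bo,
					 struct amdgpu_amdkfd_fence *ef)
{
	/* Once the BO is unmapped from the process, everything from
	 * ef's context is done or no longer touches the BO, so swap
	 * those fences for an already signaled stub. */
	dma_resv_replace_fences(bo->tbo.base.resv, ef->base.context,
				dma_fence_get_stub());
}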
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20220321135856.1331-1-christian.koenig@amd.com
struct drm_display_mode embeds a list head, so overwriting
the full struct with another one will corrupt the list
(if the destination mode is on a list). Use drm_mode_copy()
instead which explicitly preserves the list head of
the destination mode.
Even if we know the destination mode is not on any list,
using drm_mode_copy() seems decent as it sets a good
example. Bad examples of not using it might eventually
get copied into code where preserving the list head
actually matters.
Obviously one case not covered here is when the mode
itself is embedded in a larger structure and the whole
structure is copied. But if we are careful when copying
into modes embedded in structures I think we can be a
little more reassured that bogus list heads haven't been
propagated in.
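For reference, a simplified sketch of what drm_mode_copy() boils
down to (based on my reading of the drm_modes.c implementation):
copy the whole struct but keep the destination's list head intact.

void drm_mode_copy(struct drm_display_mode *dst,
		   const struct drm_display_mode *src)
{
	struct list_head head = dst->head;

	*dst = *src;
	dst->head = head;	/* preserve dst's place in any list */
}

The conversion was done with the cocci script below: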
@is_mode_copy@
@@
drm_mode_copy(...)
{
...
}
@depends on !is_mode_copy@
struct drm_display_mode *mode;
expression E, S;
@@
(
- *mode = E
+ drm_mode_copy(mode, &E)
|
- memcpy(mode, E, S)
+ drm_mode_copy(mode, E)
)
@depends on !is_mode_copy@
struct drm_display_mode mode;
expression E, S;
@@
(
- mode = E
+ drm_mode_copy(&mode, &E)
|
- memcpy(&mode, E, S)
+ drm_mode_copy(&mode, E)
)
@@
struct drm_display_mode *mode;
@@
- &*mode
+ mode
Cc: Jyri Sarha <jyri.sarha@iki.fi>
Cc: Tomi Valkeinen <tomba@kernel.org>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220218100403.7028-17-ville.syrjala@linux.intel.com
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
struct drm_display_mode embeds a list head, so overwriting
the full struct with another one will corrupt the list
(if the destination mode is on a list). Use drm_mode_copy()
instead which explicitly preserves the list head of
the destination mode.
Even if we know the destination mode is not on any list,
using drm_mode_copy() seems decent as it sets a good
example. Bad examples of not using it might eventually
get copied into code where preserving the list head
actually matters.
Obviously one case not covered here is when the mode
itself is embedded in a larger structure and the whole
structure is copied. But if we are careful when copying
into modes embedded in structures I think we can be a
little more reassured that bogus list heads haven't been
propagated in.
@is_mode_copy@
@@
drm_mode_copy(...)
{
...
}
@depends on !is_mode_copy@
struct drm_display_mode *mode;
expression E, S;
@@
(
- *mode = E
+ drm_mode_copy(mode, &E)
|
- memcpy(mode, E, S)
+ drm_mode_copy(mode, E)
)
@depends on !is_mode_copy@
struct drm_display_mode mode;
expression E, S;
@@
(
- mode = E
+ drm_mode_copy(&mode, &E)
|
- memcpy(&mode, E, S)
+ drm_mode_copy(&mode, E)
)
@@
struct drm_display_mode *mode;
@@
- &*mode
+ mode
Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220218100403.7028-8-ville.syrjala@linux.intel.com
Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
With very limited vram on svga3 it's difficult to handle all the surface
migrations. Without gbobjects, i.e. the ability to store surfaces in
guest mobs, there's no reason to support intermediate svga2 features,
especially because we can fall back to fb traces and svga3 will never
support those in-between features.
On svga3 we either want to use fb traces or screen targets
(i.e. gbobjects), nothing in between. This fixes presentation on a lot
of fusion/esxi tech previews where the exposed svga3 caps haven't been
finalized yet.
Signed-off-by: Zack Rusin <zackr@vmware.com>
Fixes: 2cd80dbd35 ("drm/vmwgfx: Add basic support for SVGA3")
Cc: <stable@vger.kernel.org> # v5.14+
Reviewed-by: Martin Krastev <krastevm@vmware.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220318174332.440068-5-zack@kde.org
Writes to SVGA_REG_CURSOR_MOBID did not wait for the buffers to be fully
populated. This sometimes resulted in the device not being aware of
the buffer when the cursor mob register was written.
Properly wait for the buffer to be fully populated before setting it
as a cursor mob.
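A sketch of the ordering (the populate helper is hypothetical;
vmw_write() and SVGA_REG_CURSOR_MOBID are the existing names):

static int sketch_set_cursor_mob(struct vmw_private *dev_priv,
				 struct vmw_buffer_object *vbo,
				 u32 mob_id)
{
	int ret;

	/* Make sure the buffer is fully populated before the
	 * device is pointed at it. */
	ret = sketch_wait_populated(vbo);	/* hypothetical */
	if (ret)
		return ret;

	vmw_write(dev_priv, SVGA_REG_CURSOR_MOBID, mob_id);
	return 0;
}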
Signed-off-by: Zack Rusin <zackr@vmware.com>
Fixes: 485d98d472 ("drm/vmwgfx: Add support for CursorMob and CursorBypass 4")
Reviewed-by: Martin Krastev <krastevm@vmware.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220318174332.440068-3-zack@kde.org
vmw_move assumed that buffers to be moved would always be
vmw_buffer_object's, but after the introduction of the new placement
for mob pages that's no longer the case.
The resulting invalid read didn't have any practical consequences
because the memory isn't used unless the object actually is a
vmw_buffer_object.
Fix it by moving the cast to the spot where the results are used.
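A sketch of the pattern (the predicate and usage are illustrative,
not the actual driver code):

static void sketch_move_notify(struct ttm_buffer_object *bo)
{
	struct vmw_buffer_object *vbo;

	if (!sketch_is_vmw_buffer_object(bo))	/* hypothetical */
		return;

	/* The cast happens only where its result is used, so a
	 * mob-page BO never gets misinterpreted. */
	vbo = container_of(bo, struct vmw_buffer_object, base);
	sketch_use(vbo);			/* hypothetical */
}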
Signed-off-by: Zack Rusin <zackr@vmware.com>
Fixes: f6be23264b ("drm/vmwgfx: Introduce a new placement for MOB page tables")
Reported-by: Chuck Lever III <chuck.lever@oracle.com>
Reviewed-by: Martin Krastev <krastevm@vmware.com>
Tested-by: Chuck Lever <chuck.lever@oracle.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220318174332.440068-2-zack@kde.org
We can easily hit the list corruption below:
==
list_add corruption. prev->next should be next (ffffffffc0ceb090), but
was ffffec604507edc8. (prev=ffffec604507edc8).
WARNING: CPU: 65 PID: 3959 at lib/list_debug.c:26
__list_add_valid+0x53/0x80
CPU: 65 PID: 3959 Comm: fbdev Tainted: G U
RIP: 0010:__list_add_valid+0x53/0x80
Call Trace:
<TASK>
fb_deferred_io_mkwrite+0xea/0x150
do_page_mkwrite+0x57/0xc0
do_wp_page+0x278/0x2f0
__handle_mm_fault+0xdc2/0x1590
handle_mm_fault+0xdd/0x2c0
do_user_addr_fault+0x1d3/0x650
exc_page_fault+0x77/0x180
? asm_exc_page_fault+0x8/0x30
asm_exc_page_fault+0x1e/0x30
RIP: 0033:0x7fd98fc8fad1
==
The race happens when one process is adding &page->lru to the
pagelist tail in fb_deferred_io_mkwrite() while another process is
re-initializing the same &page->lru in fb_deferred_io_fault(), which
is not protected by the lock.
Fix this by initializing all the page lists once during
initialization; this not only fixes the list corruption but also
avoids redundant INIT_LIST_HEAD() calls.
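A sketch of the fix; fb_deferred_io_page() is the existing defio
helper, the loop bounds are an assumption:

static void sketch_init_page_lists(struct fb_info *info)
{
	struct page *page;
	unsigned int i;

	/* Init every page's lru head once, up front, so the fault
	 * and mkwrite paths never have to (re-)initialize it. */
	for (i = 0; i < info->fix.smem_len; i += PAGE_SIZE) {
		page = fb_deferred_io_page(info, i);
		INIT_LIST_HEAD(&page->lru);
	}
}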
V2: change "int i" to "unsigned int i" (Geert Uytterhoeven)
Signed-off-by: Chuansheng Liu <chuansheng.liu@intel.com>
Fixes: 105a940416 ("fbdev/defio: Early-out if page is already enlisted")
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Javier Martinez Canillas <javierm@redhat.com>
Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20220318005003.51810-1-chuansheng.liu@intel.com