The current implementation does not guarantee that packed pixel modes
work with every dongle. Some dongles require selecting the output mode
explicitly.
Write the proper register values in packed_pixel mode, based on how it
is done in the vendor's code, and select the output color space: RGB
(no packed pixel) or YCBCR422 (packed pixel).
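A hedged sketch of the selection (the VAL_VID_MODE_FMT_* values are
illustrative placeholders, not the driver's actual defines;
sii8620_write() and the use_packed_pixel flag follow the driver's
existing conventions):

  static void sii8620_set_format(struct sii8620 *ctx)
  {
      u8 out_fmt;

      /* Packed pixel => YCBCR422 output, otherwise plain RGB. */
      if (ctx->use_packed_pixel)
          out_fmt = VAL_VID_MODE_FMT_YCBCR422; /* illustrative define */
      else
          out_fmt = VAL_VID_MODE_FMT_RGB;      /* illustrative define */

      sii8620_write(ctx, REG_VID_MODE, out_fmt);
  }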
This reverts commit e8b92efa62
("drm/bridge/sii8620: fix display of packed pixel modes in MHL2").
Signed-off-by: Maciej Purski <m.purski@samsung.com>
Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1530204243-6370-3-git-send-email-m.purski@samsung.com
Currently, the AVI infoframe is sent only in MHL3. However, some MHL2
dongles need an AVI infoframe to work correctly in either packed pixel
mode or non-packed pixel mode.
Send the AVI infoframe in set_infoframes() in every case. Create the
infoframe using drm_hdmi_infoframe_from_display_mode() instead of
manually filling each field of the infoframe structure.
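A minimal sketch of the approach (the helper's signature of this era
takes the mode plus an HDMI2-sink flag; mode, ctx and dev are assumed
to be in scope, and the packed-pixel colorspace fixup mirrors the
description above):

  union hdmi_infoframe frm;
  u8 buf[HDMI_INFOFRAME_SIZE(AVI)];
  ssize_t len;
  int ret;

  ret = drm_hdmi_infoframe_from_display_mode(&frm.avi, mode, true);
  if (ret < 0) {
      dev_err(dev, "failed to get AVI infoframe from mode\n");
      return;
  }
  /* Packed pixel mode is transmitted as YCbCr 4:2:2. */
  if (ctx->use_packed_pixel)
      frm.avi.colorspace = HDMI_COLORSPACE_YUV422;

  len = hdmi_avi_infoframe_pack(&frm.avi, buf, ARRAY_SIZE(buf));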
Signed-off-by: Maciej Purski <m.purski@samsung.com>
Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1530204243-6370-2-git-send-email-m.purski@samsung.com
This change updates the device headers to the latest device version.
Where renaming affects the existing code, it's updated accordingly.
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
The buffer object backing the user fence is reserved using the non-user
fence, i.e., as soon as the non-user fence is signaled, the user fence
buffer object can be moved or even destroyed.
Therefore, emit the user fence first.
Both fences have the same cache invalidation behavior, so this should
have no user-visible effect.
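A hedged sketch of the reordering (amdgpu-style names; the exact
fence-emit signatures vary by kernel version, so treat the plumbing as
illustrative):

  /* Emit the user fence before the non-user fence: the user fence's
   * backing BO is reserved only via the non-user fence, so once the
   * latter signals, the BO may be moved or destroyed. */
  if (job && job->uf_addr)
      amdgpu_ring_emit_fence(ring, job->uf_addr, job->uf_sequence,
                             AMDGPU_FENCE_FLAG_64BIT);

  r = amdgpu_fence_emit(ring, &fence); /* the non-user fence */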
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Two requests have come in for a backmerge,
and I've got some pull reqs on rc2, so this
just makes sense.
Signed-off-by: Dave Airlie <airlied@redhat.com>
We long ago gave up on enforcing that the device format is always
little endian, so remove a leftover conversion.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
Make the host message module function declarations similar to the
other declarations in vmwgfx_drv.h, and include the header in
vmwgfx_msg.c.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
Reorganize the fence wait loop somewhat to make it look more like the
examples in the set_current_state() kerneldoc, and add some code
comments.
Also, if we're about to time out, make sure we check again whether the
fence is actually signaled.
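The reorganized loop, sketched after the set_current_state() kerneldoc
pattern (fence_signaled() stands in for the driver's real signaled
test, and wait_timeout is the caller-supplied timeout in jiffies):

  long timeout = wait_timeout;

  for (;;) {
      /* Publish our state before testing the condition; the waker
       * stores the condition before waking (see the
       * set_current_state() kerneldoc for the pairing). */
      set_current_state(TASK_INTERRUPTIBLE);

      if (signal_pending(current)) {
          timeout = -ERESTARTSYS;
          break;
      }
      if (fence_signaled(fence))
          break;
      if (timeout == 0) {
          /* About to time out: re-check whether the fence
           * signaled while we were preparing to sleep. */
          if (fence_signaled(fence))
              timeout = 1;
          break;
      }
      timeout = schedule_timeout(timeout);
  }
  __set_current_state(TASK_RUNNING);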
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
Make sure the error messages are a bit more descriptive, so that
a log reader may understand what's gone wrong.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
The gui_x/y positioning in the display unit is protected by
requested_layout_mutex. Add a copy of it to vmw_connector_state; the
modeset commit will then refer to the state copy, keeping it in sync
with the modeset_check state.
v2: Tested with CONFIG_PROVE_LOCKING enabled.
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
To avoid a race condition between the update_layout ioctl and the
modeset ioctl when accessing the gui_x/y positioning, add a new mutex,
requested_layout_mutex.
Also use drm_for_each_connector_iter to iterate over the connector
list.
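A sketch of the two changes together (vmw_connector_to_du() and the
du->gui_x/gui_y fields follow the driver's existing naming; rects is
an illustrative array of per-unit layout rectangles):

  struct drm_connector_list_iter conn_iter;
  struct drm_connector *con;

  mutex_lock(&dev_priv->requested_layout_mutex);
  drm_connector_list_iter_begin(dev, &conn_iter);
  drm_for_each_connector_iter(con, &conn_iter) {
      struct vmw_display_unit *du = vmw_connector_to_du(con);

      /* gui_x/y updates are now serialized against modeset. */
      du->gui_x = rects[du->unit].x1;
      du->gui_y = rects[du->unit].y1;
  }
  drm_connector_list_iter_end(&conn_iter);
  mutex_unlock(&dev_priv->requested_layout_mutex);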
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
This validation is not required, because user-space will send a
create_fb request once the memory is allocated. The check should
instead be performed during mode-setting.
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
For cases where a full modeset is not requested, such as a page flip,
skip memory validation, since the topology is not changed.
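A sketch of the short-circuit, using the core
drm_atomic_crtc_needs_modeset() helper (the surrounding check function
is illustrative):

  struct drm_crtc *crtc;
  struct drm_crtc_state *new_crtc_state;
  bool need_modeset = false;
  int i;

  for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
      if (drm_atomic_crtc_needs_modeset(new_crtc_state))
          need_modeset = true;
  }

  /* Page flip and friends: topology unchanged, skip the memory check. */
  if (!need_modeset)
      return 0;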
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Call the same display memory validation function that is used by
modeset_check. This ensures consistency: the kernel changes the
preferred mode/topology only if it is supported.
Also change the internal function to use drm_rect instead of
drm_vmw_rect.
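The drm_vmw_rect to drm_rect conversion implied above is a straight
x/y/w/h to x1/y1/x2/y2 mapping (vrect names a struct drm_vmw_rect
taken from the ioctl argument):

  struct drm_rect rect = {
      .x1 = vrect.x,
      .y1 = vrect.y,
      .x2 = vrect.x + (int)vrect.w,
      .y2 = vrect.y + (int)vrect.h,
  };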
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
This patch adds display (primary) memory validation during the modeset
check. Display memory validation is applicable to both SOU and STDU,
so allow both display units to undergo this check.
Also add a check for the SVGA_CAP_NO_BB_RESTRICTION capability, which
lifts the bounding-box restriction for STDU.
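A sketch of the capability check (the bounding-box rect bb is
illustrative; the capability flag and the stdu_max_* fields are the
driver's own):

  /* STDU enforces a max bounding box unless the device lifts it. */
  if (dev_priv->active_display_unit == vmw_du_screen_target &&
      !(dev_priv->capabilities & SVGA_CAP_NO_BB_RESTRICTION) &&
      (drm_rect_width(&bb) > dev_priv->stdu_max_width ||
       drm_rect_height(&bb) > dev_priv->stdu_max_height))
      return -EINVAL;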
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
vmw_kms_atomic_check_modeset() currently checks the config using the
legacy state, which is only updated after a commit has happened.
This means vmw_kms_atomic_check_modeset() will reject an invalid config
on the next update rather than the current one.
Fix this by using the new states for config checking.
Signed-off-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Previously when evicting resources we were unconditionally calling
ttm_eu_reserve_buffers with a NULL ww acquire context. That meant all
buffer object reserves were done using trylock semantics.
That makes sense when evicting during resource validation, because then
there already are a number of buffers reserved and using waiting locks
would cause lockdep errors.
That's not the case when unconditionally evicting all resources as part
of driver takedown or hibernation, so in that code path, make sure
we have a ww acquire context to get waiting lock buffer object reserve
semantics.
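A sketch of the takedown/hibernation path after the change (val_list
collects the buffers backing the resources; previously the first
argument was NULL, giving trylock semantics):

  struct ww_acquire_ctx ticket;
  int ret;

  /* A real acquire context => waiting-lock reserve semantics. */
  ret = ttm_eu_reserve_buffers(&ticket, &val_list, true, NULL);
  if (unlikely(ret != 0))
      return ret;

  /* ... evict the resources backed by the reserved buffers ... */

  ttm_eu_backoff_reservation(&ticket, &val_list);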
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Only try to unmap cached maps when the buffer is moved into or out of
vram. Otherwise the underlying pages stay the same.
Also, when unbinding resources from MOBs about to move, make sure we're
really moving out of MOB memory.
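A sketch of the narrowed conditions in the move-notify path (the
vmw_bo_unmap() and vmw_resource_unbind_list() helpers stand in for the
driver's unmap and unbind code):

  /* The backing pages only change when VRAM is involved in the move. */
  if (mem->mem_type == TTM_PL_VRAM || bo->mem.mem_type == TTM_PL_VRAM)
      vmw_bo_unmap(vbo);

  /* Only unbind when we're really leaving MOB memory. */
  if (mem->mem_type != VMW_PL_MOB && bo->mem.mem_type == VMW_PL_MOB)
      vmw_resource_unbind_list(vbo);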
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
It makes more sense to have all the buffer object related code in
a single file rather than splitting it up between the resource code
and buffer object pinning utilities.
Place all buffer object related code in vmwgfx_bo.c. Fix up headers
and export resource functionality when needed in the buffer object
code.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
Initially, vmware buffer objects were only used as DMA buffers, so the
name DMA buffer was a natural one. However, they are now also used as
dumb buffers and as MOBs backing guest-backed objects, so renaming
them to buffer objects is logical, particularly since there is a
dmabuf subsystem in the kernel where a DMA buffer means something
completely different.
This also renames the user-space API structures and IOCTL names
correspondingly, but the old names remain defined for now and the ABI
hasn't changed.
There are a couple of minor style changes to make checkpatch happy.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
The current Wound-Wait mutex algorithm is actually not Wound-Wait but
Wait-Die. Implement Wound-Wait as well, as a per-ww-class choice.
Wound-Wait is, contrary to Wait-Die, a preemptive algorithm and is
known to generate fewer backoffs. Testing reveals that this is true if
the number of simultaneous contending transactions is small. As the
number of simultaneous contending threads increases, Wound-Wait becomes
inferior to Wait-Die in terms of elapsed time, possibly due to the
larger number of locks held by sleeping transactions.
Update documentation and callers.
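A minimal sketch of the per-class choice: the acquire/backoff pattern
is unchanged, and only the class-definition macro selects the
algorithm (obj_*_class and lock_pair() are illustrative names):

  static DEFINE_WW_CLASS(obj_ww_class); /* Wound-Wait */
  static DEFINE_WD_CLASS(obj_wd_class); /* Wait-Die */

  static void lock_pair(struct ww_mutex *a, struct ww_mutex *b)
  {
      struct ww_acquire_ctx ctx;

      ww_acquire_init(&ctx, &obj_ww_class);

      /* The first lock in a context never backs off. */
      ww_mutex_lock(a, &ctx);
      while (ww_mutex_lock(b, &ctx) == -EDEADLK) {
          /* Wounded (or died): drop everything we hold, sleep on
           * the contended lock, then retake the other one. */
          ww_mutex_unlock(a);
          ww_mutex_lock_slow(b, &ctx);
          swap(a, b);
      }
      ww_acquire_done(&ctx);
      /* ... critical section, then ww_mutex_unlock() both and
       * ww_acquire_fini(&ctx) ... */
  }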
Timings using git://people.freedesktop.org/~thomash/ww_mutex_test
tag patch-18-06-15
Each thread runs 100000 batches of lock/unlock of 800 ww mutexes
randomly chosen out of 100000. Four-core Intel x86_64:

  Algorithm    #threads   Rollbacks    Time
  Wound-Wait          4        ~100    ~17s
  Wait-Die            4     ~150000    ~19s
  Wound-Wait         16     ~360000   ~109s
  Wait-Die           16     ~450000    ~82s
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Gustavo Padovan <gustavo@padovan.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Sean Paul <seanpaul@chromium.org>
Cc: David Airlie <airlied@linux.ie>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: Philippe Ombredanne <pombredanne@nexb.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: linux-doc@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Co-authored-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
In commits:
34a2ab5e06 ("drm: Add acquire ctx parameter to ->update_plane")
1931529448 ("drm: Add acquire ctx parameter to ->plane_disable")
a pointer to a drm_modeset_acquire_ctx structure was added as an
argument to the method prototypes. The transitional helpers are
supposed to be directly plugged in as implementations of these
methods, but doing so generates a warning. Add the missing
argument.
A number of buggy users of drm_plane_helper_disable() were added that
need to be fixed up for this change; we do so by passing a NULL ctx
argument.
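A sketch of the change (prototype per the description above; the funcs
table shows a transitional helper plugged in directly as a method
implementation, and plane_funcs is an illustrative name):

  int drm_plane_helper_disable(struct drm_plane *plane,
                               struct drm_modeset_acquire_ctx *ctx);

  static const struct drm_plane_funcs plane_funcs = {
      .update_plane = drm_plane_helper_update,
      .disable_plane = drm_plane_helper_disable,
      .destroy = drm_plane_cleanup,
  };

  /* A legacy caller, fixed up for the new argument: */
  drm_plane_helper_disable(plane, NULL);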
Fixes: 1931529448 ("drm: Add acquire ctx parameter to ->plane_disable")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/E1fa1Zr-0005gT-VF@rmk-PC.armlinux.org.uk
new_active_crtcs is a bitmask; new_active_crtc_count is the actual
count.
Reviewed-by: Rex Zhu <Rex.Zhu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The PIPEDSL register freezes on PSR entry, and if PSR hasn't fully
exited, the pipe_update_start call schedules itself out to check back
later.
On the ChromeOS-4.4 kernel, which is fairly up-to-date w.r.t. drm/i915
but lags w.r.t. core kernel code, hot-plugging an external display
triggers tons of "potential atomic update errors" in the dmesg, on
*pipe A*. A closer analysis reveals that we try to read the scanline
3 times and eventually time out, because PSR hasn't exited fully,
leading to a PIPEDSL stuck @ 1599. This issue is not seen on upstream
kernels, because for *some* reason we loop inside
intel_pipe_update_start for ~2+ msec, which in this case is more than
enough to exit PSR fully; hence an *unstuck* PIPEDSL counter, hence no
error. The ChromeOS kernel, on the other hand, spends ~1.1 msec
looping inside intel_pipe_update_start and hence errors out because
the source is still in PSR.
Regardless, we should wait for PSR exit (if PSR is disabled, we incur
a ~1-2 usec penalty) before reading the PIPEDSL, because if we haven't
fully exited PSR, checking for vblank evasion isn't actually
applicable.
v4: Comment explaining psr_wait after enabling VBL interrupts (DK)
v5: CAN_PSR() to handle platforms that don't support PSR.
v6: Handle local_irq_disable on early return (Chris)
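A sketch of where the wait lands at the top of
intel_pipe_update_start() (the intel_psr_wait_for_idle() signature is
an assumption based on this series):

  /* Wait for PSR exit (cheap if PSR is disabled) so PIPEDSL is
   * unstuck before we run the vblank-evasion scanline checks. */
  if (intel_psr_wait_for_idle(dev_priv))
      DRM_ERROR("PSR idle timed out, atomic update may fail\n");

  local_irq_disable();
  /* ... scanline read / vblank evasion loop follows ... */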
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Tarun Vyas <tarun.vyas@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627200250.1515-2-tarun.vyas@intel.com
This is a lockless version of the existing psr_wait_for_idle().
We want to wait for PSR to idle out inside intel_pipe_update_start. At
the time of a pipe update, we should never race with any PSR enable or
disable code, which is part of crtc enable/disable. The follow-up
patch will use this lockless wait inside pipe_update_start to wait for
PSR to idle out before checking for vblank evasion. We need to keep
the wait in pipe_update_start as short as possible, so we can live and
flourish without taking any PSR locks at all.
Even if PSR is never enabled, psr2_enabled will be false and this
function will wait for PSR1 to idle out, which should return
immediately, so the wait is very short (~1-2 usec) for cases where PSR
is disabled.
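A sketch of the lockless wait (register and mask names as in i915 of
this era; treat the exact signature and the 50 ms value, raised per v7
to cover refresh rates below 60 Hz, as assumptions):

  int intel_psr_wait_for_idle(struct drm_i915_private *dev_priv)
  {
      i915_reg_t reg;
      u32 mask;

      if (dev_priv->psr.psr2_enabled) {
          reg = EDP_PSR2_STATUS;
          mask = EDP_PSR2_STATUS_STATE_MASK;
      } else {
          reg = EDP_PSR_STATUS;
          mask = EDP_PSR_STATUS_STATE_MASK;
      }

      /* No psr lock taken: at pipe-update time we cannot race with
       * PSR enable/disable, which is part of crtc enable/disable. */
      return intel_wait_for_register(dev_priv, reg, mask, 0, 50);
  }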
v2: Add comment to explain the 25msec timeout (DK)
v3: Rename psr_wait_for_idle to __psr_wait_for_idle_locked to avoid
naming conflicts and propagate err (if any) to the caller (Chris)
v5: Form a series with the next patch
v7: Better explain the need for lockless wait and increase the max
timeout to handle refresh rates < 60 Hz (Daniel Vetter)
v8: Rebase
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Tarun Vyas <tarun.vyas@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627200250.1515-1-tarun.vyas@intel.com