This is required to properly handle failing dpms calls. While making a wait in i915 interruptible, I noticed that the dpms sequence could fail with -ERESTARTSYS because it was waiting interruptibly for flips. So from now on, allow drivers to fail in their connector dpms callback. Encoder and crtc dpms callbacks are unaffected.
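A minimal sketch of a connector ->dpms() implementation that propagates such an error; foo_wait_for_flips() is a hypothetical stand-in for the interruptible wait, not actual i915 code:

  #include <drm/drm_crtc_helper.h>

  /* hypothetical driver callback: connector ->dpms() may now fail */
  static int foo_connector_dpms(struct drm_connector *connector, int mode)
  {
          int err;

          /* an interruptible wait for pending flips can return
           * -ERESTARTSYS; propagate it instead of swallowing it */
          err = foo_wait_for_flips(connector);
          if (err < 0)
                  return err;

          return drm_helper_connector_dpms(connector, mode);
  }
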
Changes since v1:
- Update kerneldoc for the drm helper functions.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
[danvet: Resolve conflicts due to different merge order.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Previously, output drivers would enable continuous display mode and power up the display controller at various points during initialization. This is suboptimal because it means accessing display controller registers from within output drivers and duplicates a bit of code.
Move this code into the display controller driver and enable the display
controller as the final step of the ->mode_set_nofb() implementation.
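The tail end of the ->mode_set_nofb() implementation then switches the controller into continuous mode, roughly as sketched below; register and helper names are paraphrased from the Tegra DC driver and may not match the final code exactly:

  static void tegra_crtc_mode_set_nofb(struct drm_crtc *crtc)
  {
          struct tegra_dc *dc = to_tegra_dc(crtc);

          /* ... program timings, clocks and output parameters ... */

          /* enable continuous display mode as the very last step */
          tegra_dc_writel(dc, DISP_CTRL_MODE_C_DISPLAY, DC_CMD_DISPLAY_COMMAND);

          /* latch the new configuration */
          tegra_dc_writel(dc, GENERAL_ACT_REQ, DC_CMD_STATE_CONTROL);
  }
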
Signed-off-by: Thierry Reding <treding@nvidia.com>
All output drivers have now been converted to use the ->atomic_check()
callback, so the ->mode_fixup() callbacks are no longer used.
Signed-off-by: Thierry Reding <treding@nvidia.com>
The implementation of the ->atomic_check() callback precomputes all parameters to check whether the given configuration can be applied. If so, the precomputed values are stored in the atomic state object for the encoder and applied during the modeset. That way the modeset no longer needs to perform any checking but can simply program the precomputed values into registers.
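Conceptually the split looks like the sketch below; the subclassed state, field names and helpers are illustrative, not the actual Tegra ones:

  /* driver-subclassed CRTC state carrying the precomputed values */
  struct foo_crtc_state {
          struct drm_crtc_state base;
          unsigned long pclk;     /* precomputed pixel clock in Hz */
          unsigned int div;       /* precomputed shift clock divider */
  };

  static int foo_encoder_atomic_check(struct drm_encoder *encoder,
                                      struct drm_crtc_state *crtc_state,
                                      struct drm_connector_state *conn_state)
  {
          struct foo_crtc_state *state = to_foo_crtc_state(crtc_state);

          /* validate the mode and precompute the clock parameters */
          state->pclk = crtc_state->adjusted_mode.clock * 1000;
          state->div = foo_compute_divider(encoder, state->pclk);

          return state->div == 0 ? -EINVAL : 0;
  }

  /* the modeset then only programs state->pclk and state->div */
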
Signed-off-by: Thierry Reding <treding@nvidia.com>
Hook up the default ->reset() and ->atomic_duplicate_state() helpers.
This ensures that state objects are properly created and framebuffer
reference counts correctly maintained.
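For a connector this boils down to pointing the funcs at the generic atomic helpers, along these lines; the detect and destroy callbacks are placeholders, and planes and CRTCs are wired up analogously:

  static const struct drm_connector_funcs foo_connector_funcs = {
          .reset = drm_atomic_helper_connector_reset,
          .detect = foo_connector_detect,
          .fill_modes = drm_helper_probe_single_connector_modes,
          .destroy = foo_connector_destroy,
          .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
          .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
  };
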
Signed-off-by: Thierry Reding <treding@nvidia.com>
Implement initial atomic state handling. Hook up the CRTCs', planes' and connectors' ->atomic_destroy_state() callbacks to ensure that the atomic state objects don't leak.
Furthermore the CRTC now implements the ->mode_set_nofb() callback that
is used by new helpers to implement ->mode_set() and ->mode_set_base().
These new helpers also make use of the new plane helper functions which
the driver now provides.
Signed-off-by: Thierry Reding <treding@nvidia.com>
The tegra_output midlayer is now completely gone; output drivers use the tegra_output infrastructure purely as a helper library.
Signed-off-by: Thierry Reding <treding@nvidia.com>
The debugfs cleanup code never fails, so no error is returned. Therefore
the functions can all return void instead.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Implement the encoder and connector within the DSI driver itself, using the Tegra output helpers rather than using the Tegra output as a midlayer. Doing so removes one level of indirection and makes output drivers more flexible, while keeping most of the advantages provided by the common output helpers.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Previously output drivers would all stop the display controller in their
disable path. However with the transition to atomic modesetting the
display controller needs to be kept running until all planes have been
disabled so that software can properly determine (using VBLANK counts)
when it is safe to remove the framebuffers associated with the planes.
Moving this code into the display controller's disable path also gets rid of its duplication across all output drivers.
Signed-off-by: Thierry Reding <treding@nvidia.com>
All output drivers have open-coded variants of this function, so export
it to remove some code duplication.
Signed-off-by: Thierry Reding <treding@nvidia.com>
This allows a DRM driver unload/reload cycle to completely reset the DSI controller and may help in situations where the controller has ended up in a broken state.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Use a sized unsigned 32-bit data type (u32) to store register contents.
The DSI registers are 32 bits wide irrespective of the architecture's
data width.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Make sure the DSI PHY_TIMING and BTA_TIMING registers are initialized
when the clocks are set up as opposed to when the output is enabled.
This makes sure that the PHY timings are properly set up when the panel
is prepared and that DCS commands sent at that time use the appropriate
timings.
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Add support for sending MIPI DSI command packets from the host to a
peripheral. This is required for panels that need configuration before
they accept video data.
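In terms of the DSI framework this means implementing the host's ->transfer() operation; a bare-bones skeleton might look like the following, where foo_dsi_write_packet() and the attach/detach callbacks are hypothetical helpers:

  #include <drm/drm_mipi_dsi.h>

  static ssize_t foo_dsi_host_transfer(struct mipi_dsi_host *host,
                                       const struct mipi_dsi_msg *msg)
  {
          struct mipi_dsi_packet packet;
          int err;

          /* translate the message into an on-the-wire packet */
          err = mipi_dsi_create_packet(&packet, msg);
          if (err < 0)
                  return err;

          /* write header and payload to the host FIFO and trigger the
           * transmission */
          err = foo_dsi_write_packet(host, &packet);
          if (err < 0)
                  return err;

          return msg->tx_len;
  }

  static const struct mipi_dsi_host_ops foo_dsi_host_ops = {
          .attach = foo_dsi_host_attach,
          .detach = foo_dsi_host_detach,
          .transfer = foo_dsi_host_transfer,
  };
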
Signed-off-by: Thierry Reding <treding@nvidia.com>
Implement ganged mode support for the Tegra DSI driver. The DSI host controller to gang up with is specified via a phandle in the device tree, and the resolved DSI host controller is used to program the ganged-mode registers.
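Resolving the partner controller might look roughly like this; the property name and the slave field are assumptions based on the description above:

  static int foo_dsi_parse_ganged_mode(struct foo_dsi *dsi,
                                       struct device_node *np)
  {
          struct device_node *partner;

          /* "nvidia,ganged-mode" is the assumed phandle property */
          partner = of_parse_phandle(np, "nvidia,ganged-mode", 0);
          if (!partner)
                  return 0; /* ganged mode not configured */

          dsi->slave = of_find_device_by_node(partner);
          of_node_put(partner);

          if (!dsi->slave)
                  return -EPROBE_DEFER;

          return 0;
  }
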
Signed-off-by: Thierry Reding <treding@nvidia.com>
In preparation for adding ganged-mode support, this commit splits out
the tegra_dsi_set_timeout() function so that it can be reused for the
slave DSI controller.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Add support for DC-driven command mode. This is a mode where the video
stream sent by the display controller is packed into DCS command packets
(write_memory_start and write_memory_continue) by the DSI controller. It
can be used for panels with a remote framebuffer and is useful to save
power when used with a dynamic refresh rate (not yet supported by the
driver).
Signed-off-by: Thierry Reding <treding@nvidia.com>
For command mode panels, the DSI controller needs to be enabled and
configured so that panel drivers can send commands prior to the video
stream being enabled.
Move code from the monolithic output enable/disable functions into
smaller, reusable units to allow more fine-grained control over the
controller state.
Signed-off-by: Thierry Reding <treding@nvidia.com>
The driver wasn't even attempting to do any cleanup when probing failed. Fix this by releasing any resources acquired up to the point of failure and putting the device back into its original state (reset asserted, clocks off).
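The probe error path then unwinds in reverse order, roughly along these lines; resource and helper names (foo_dsi_register(), the "dsi" reset) are generic stand-ins, not necessarily the driver's exact ones:

  #include <linux/clk.h>
  #include <linux/platform_device.h>
  #include <linux/reset.h>

  static int foo_dsi_probe(struct platform_device *pdev)
  {
          struct foo_dsi *dsi;
          int err;

          dsi = devm_kzalloc(&pdev->dev, sizeof(*dsi), GFP_KERNEL);
          if (!dsi)
                  return -ENOMEM;

          dsi->clk = devm_clk_get(&pdev->dev, NULL);
          if (IS_ERR(dsi->clk))
                  return PTR_ERR(dsi->clk);

          dsi->rst = devm_reset_control_get(&pdev->dev, "dsi");
          if (IS_ERR(dsi->rst))
                  return PTR_ERR(dsi->rst);

          err = clk_prepare_enable(dsi->clk);
          if (err < 0)
                  return err;

          err = reset_control_deassert(dsi->rst);
          if (err < 0)
                  goto disable_clk;

          err = foo_dsi_register(dsi);
          if (err < 0)
                  goto assert_reset;

          platform_set_drvdata(pdev, dsi);
          return 0;

  assert_reset:
          reset_control_assert(dsi->rst);
  disable_clk:
          clk_disable_unprepare(dsi->clk);
          return err;
  }
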
Signed-off-by: Thierry Reding <treding@nvidia.com>
DSI panels can always be hotplugged via the DSI bus's attach/detach infrastructure, so unconditionally mark the connector hotpluggable. While at it, also make sure that when a panel is detached the connector is marked disconnected before calling into the DRM hotplug helpers, so that they see the correct state.
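The detach path then becomes something like this sketch; host_to_foo_output() and the output structure are illustrative, while the helpers and constants are the regular DRM ones:

  static int foo_dsi_host_detach(struct mipi_dsi_host *host,
                                 struct mipi_dsi_device *device)
  {
          struct foo_output *output = host_to_foo_output(host);

          output->panel = NULL;

          /* mark the connector disconnected before notifying userspace */
          output->connector.status = connector_status_disconnected;
          drm_kms_helper_hotplug_event(output->connector.dev);

          return 0;
  }

  /* at init time the connector is unconditionally marked hotpluggable,
   * i.e. connector->polled = DRM_CONNECTOR_POLL_HPD */
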
Signed-off-by: Thierry Reding <treding@nvidia.com>
The common clock framework will take care of preparing and enabling the
parent of the DSI clock automatically.
Signed-off-by: Thierry Reding <treding@nvidia.com>
In preparation for supporting command mode panels, don't disable the
clock when the output is disabled. The output will be enabled only after
the panel has been programmed in command mode, so the clock must always
remain on.
As a side-effect, pad calibration now only needs to be done at driver
probe time, since neither power nor controller state will go away before
driver removal. While at it, use a 32-bit variable to store register
content because the registers are 32-bit even on 64-bit Tegra.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Rather than hardcoding them as macros, make the host and video FIFO
depths parameters so that they can be more easily adjusted if a new
generation of the Tegra SoC changes them.
While at it, set the depth of the video FIFO to the correct value of
1920 *words* rather than *bytes*.
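Instead of #defines the depths become per-controller fields, for example; the field names are illustrative and the 64-word host FIFO depth is an assumption:

  struct foo_dsi {
          /* ... */
          unsigned int video_fifo_depth;  /* in words */
          unsigned int host_fifo_depth;   /* in words */
  };

  static void foo_dsi_init_fifo(struct foo_dsi *dsi)
  {
          dsi->video_fifo_depth = 1920;   /* 1920 words, not bytes */
          dsi->host_fifo_depth = 64;
  }
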
Signed-off-by: Thierry Reding <treding@nvidia.com>
When tegra-drm.ko is built as a module, these MODULE_DEVICE_TABLEs allow
the module to be auto-loaded since the module will match the devices
instantiated from device tree.
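For each sub-driver this is a one-liner next to its OF match table, for example (the compatible string is just one of the entries):

  static const struct of_device_id foo_dsi_of_match[] = {
          { .compatible = "nvidia,tegra114-dsi", },
          { },
  };
  MODULE_DEVICE_TABLE(of, foo_dsi_of_match);
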
(Notes for stable: in 3.14+, just git rm any conflicting file, since they
are added in later kernels. For 3.13 and below, manual merging will be
needed)
Cc: <stable@vger.kernel.org>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Handle the MIPI_DSI_CLOCK_NON_CONTINUOUS flag and only enable TX-only clock behavior when this flag is present, so that panels requiring continuous clock mode can operate with this driver.
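In the enable path this amounts to a check along these lines; the register bit name is illustrative:

  static u32 foo_dsi_clock_control(struct mipi_dsi_device *device, u32 value)
  {
          /* only let the clock lane stop between transmissions when the
           * peripheral explicitly supports non-continuous clock mode */
          if (device->mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS)
                  value |= FOO_DSI_TX_ONLY_CLOCK;
          else
                  value &= ~FOO_DSI_TX_ONLY_CLOCK;

          return value;
  }
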
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
The DRM core can now cope with drivers that don't have an associated
struct drm_bus, so the host1x implementation is no longer useful.
Signed-off-by: Thierry Reding <treding@nvidia.com>
In some cases the pixel clock used to be incorrect, which is why it had to be recomputed. It turns out that the reason it was incorrect is that it was being used wrongly. When used correctly, there is no need for the recomputation.
Signed-off-by: Thierry Reding <treding@nvidia.com>
The shift clock divider is highly dependent on the type of output, so
push computation of it down into the output drivers. The old code used
to work merely by accident.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Assert the DSI controller's reset when the driver is unloaded to reduce
power consumption and to put the controller into a known state for
subsequent driver reloads.
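With the reset framework this is a single call in the remove path, e.g. (struct members are illustrative):

  static int foo_dsi_remove(struct platform_device *pdev)
  {
          struct foo_dsi *dsi = platform_get_drvdata(pdev);

          /* put the controller back into reset so that a later probe
           * starts from a known state */
          reset_control_assert(dsi->rst);
          clk_disable_unprepare(dsi->clk);

          return 0;
  }
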
Signed-off-by: Thierry Reding <treding@nvidia.com>
To prevent the enable and disable operations from potentially running multiple times, add guards that return early when the output is already in the targeted state.
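The guards are a simple early return keyed off the current state, e.g. (the enabled flag is illustrative):

  static void foo_output_enable(struct foo_output *output)
  {
          if (output->enabled)
                  return;

          /* ... power up and program the output ... */

          output->enabled = true;
  }

  static void foo_output_disable(struct foo_output *output)
  {
          if (!output->enabled)
                  return;

          /* ... stop the video stream and power down ... */

          output->enabled = false;
  }
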
Signed-off-by: Thierry Reding <treding@nvidia.com>
The packet sequencer needs to be programmed depending on the video mode
of the attached peripheral. Add support for non-burst video modes with
sync events (as opposed to sync pulses) and select either sequence
depending on the video mode.
Signed-off-by: Thierry Reding <treding@nvidia.com>
The DSI controllers are powered by a (typically 1.2 V) regulator. Usually this supply is always on, so there was no need to support enabling or disabling it thus far. But in order not to consume any power when DSI is inactive, give the driver a chance to enable or disable the supply as needed.
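In driver terms this is the usual regulator get/enable/disable pattern, sketched below; the supply name and struct member are assumptions:

  /* at probe time: dsi->avdd = devm_regulator_get(&pdev->dev, "avdd-dsi-csi"); */

  static int foo_dsi_power_on(struct foo_dsi *dsi)
  {
          return regulator_enable(dsi->avdd);
  }

  static void foo_dsi_power_off(struct foo_dsi *dsi)
  {
          regulator_disable(dsi->avdd);
  }
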
Signed-off-by: Thierry Reding <treding@nvidia.com>
A bunch of registers are initialized to 0 during driver probe. It turns out that none of these writes are actually needed, so they can simply be dropped.
Signed-off-by: Thierry Reding <treding@nvidia.com>
The pixel format enumeration values used by the Tegra DSI controller don't match those defined by the DSI framework. Make sure to convert them to the controller's internal representation before writing the value to the register.
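The conversion is a small switch from the framework's enum mipi_dsi_pixel_format to the controller's encoding; the register values below are placeholders, not the real Tegra ones:

  static int foo_dsi_get_format(enum mipi_dsi_pixel_format format, u32 *value)
  {
          switch (format) {
          case MIPI_DSI_FMT_RGB888:
                  *value = FOO_DSI_FORMAT_24P;    /* placeholder values */
                  return 0;

          case MIPI_DSI_FMT_RGB666:
                  *value = FOO_DSI_FORMAT_18NP;
                  return 0;

          case MIPI_DSI_FMT_RGB666_PACKED:
                  *value = FOO_DSI_FORMAT_18P;
                  return 0;

          case MIPI_DSI_FMT_RGB565:
                  *value = FOO_DSI_FORMAT_16P;
                  return 0;

          default:
                  return -EINVAL;
          }
  }
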
Signed-off-by: Thierry Reding <treding@nvidia.com>
The majority of the code in this driver is licensed under the GPL v2, so
relicense the rest under GPL v2 as well for consistency.
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Some of the code in the CRTC's mode setting code is specific to the RGB
output or needs to be called slightly differently depending on the type
of output. Push that code down into the output drivers.
Signed-off-by: Thierry Reding <treding@nvidia.com>
In case of error, the devm_ioremap_resource() function returns ERR_PTR()
and never NULL. The NULL test in the return value check should therefore
be replaced with IS_ERR().
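I.e. the check becomes (variable names illustrative):

  dsi->regs = devm_ioremap_resource(&pdev->dev, res);
  if (IS_ERR(dsi->regs))
          return PTR_ERR(dsi->regs);
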
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Thierry Reding <treding@nvidia.com>
This commit adds support for both DSI outputs found on Tegra. Only very
minimal functionality is implemented, so advanced features like ganged
mode won't work.
Due to the lack of other test hardware, some sections of the driver are
hardcoded to work with Dalmore.
Signed-off-by: Thierry Reding <treding@nvidia.com>