When fbdev initialization fails, make sure to unreference the GEM
objects properly. Note that we can't do this in the general error
unwinding path because ownership of the GEM object references is
transferred to the framebuffer upon creation.
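A rough sketch of the required unwinding (the planes[] array and loop
index are hypothetical; the GEM helper is the one available at this
time):

    unreference:
        while (i--)
            drm_gem_object_unreference_unlocked(&planes[i]->gem);

        return ERR_PTR(err);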
Signed-off-by: Thierry Reding <treding@nvidia.com>
When the DRM device is torn down and the connector is removed, make sure
to detach the panel so that no dangling pointers are left behind.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Drop a reference instead of directly calling the framebuffer .destroy()
callback at fbdev free time. This is necessary to make sure the object
isn't destroyed if anyone else still has a reference.
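In other words (a sketch, not the literal hunk; the fbdev/fb pointers
follow the driver's existing naming):

    /* before: destroys the framebuffer even if others still
     * hold references */
    fbdev->fb->base.funcs->destroy(&fbdev->fb->base);

    /* after: destruction happens only when the last reference
     * is dropped */
    drm_framebuffer_unreference(&fbdev->fb->base);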
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Thierry Reding <treding@nvidia.com>
When creating a dumb buffer object using the DRM_IOCTL_MODE_CREATE_DUMB
IOCTL, only the width, height, bpp and flags parameters are inputs. The
caller is not guaranteed to zero out or set handle, pitch and size, so
the driver must not treat these values as possible inputs.
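A sketch of the intended behaviour, treating handle, pitch and size
purely as outputs (alignment constraints elided):

    static int tegra_bo_dumb_create(struct drm_file *file,
                                    struct drm_device *drm,
                                    struct drm_mode_create_dumb *args)
    {
        /* derive pitch and size from the inputs instead of
         * trusting whatever the caller left in those fields */
        args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
        args->size = args->pitch * args->height;
        ...
    }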
Fixes a bug where running the Weston compositor on Tegra DRM would
attempt to allocate a 3 GiB framebuffer.
Fixes: de2ba664c3 ("gpu: host1x: drm: Add memory manager and fb")
Cc: stable@vger.kernel.org
Signed-off-by: Thierry Reding <treding@nvidia.com>
The hotplug handling needs access to the DRM device, which only appears
at ->init() time. Disable interrupts up until that time. Similarly, when
an output is removed, disable the hotplug interrupt again because the
DRM device (and with it the hotplug infrastructure) is going away.
Also make sure to only access the DRM device if it's available. Given
the above change for the hotplug interrupt this should really never
happen, but the extra check doesn't hurt either.
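The guard amounts to something like this (sketch):

    static irqreturn_t hpd_irq(int irq, void *data)
    {
        struct tegra_output *output = data;

        /* only forward the event if a DRM device is attached */
        if (output->connector.dev)
            drm_helper_hpd_irq_event(output->connector.dev);

        return IRQ_HANDLED;
    }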
Signed-off-by: Thierry Reding <treding@nvidia.com>
This allows the primary plane and cursor to be exposed as regular
DRM/KMS planes, which is a prerequisite for atomic modesetting and gives
userspace more flexibility over controlling them.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Using an unsigned long type causes these variables to become 64-bit on
64-bit SoCs. In practice this should always work, but there's no need to
carry around the additional 32 bits.
Signed-off-by: Thierry Reding <treding@nvidia.com>
The sequence to commit changes to the DC, window or cursor configuration
is repetitive and can be extracted into separate functions for ease of
use.
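For example, the display controller commit sequence boils down to
roughly this (register names as used elsewhere in the driver):

    static void tegra_dc_commit(struct tegra_dc *dc)
    {
        tegra_dc_writel(dc, GENERAL_UPDATE, DC_CMD_STATE_CONTROL);
        tegra_dc_writel(dc, GENERAL_ACT_REQ, DC_CMD_STATE_CONTROL);
    }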
Signed-off-by: Thierry Reding <treding@nvidia.com>
When an IOMMU device is available on the platform bus, allocate an IOMMU
domain and attach the display controllers to it. The display controllers
can then scan out non-contiguous buffers by mapping them through the
IOMMU.
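Roughly (error handling elided):

    struct iommu_domain *domain;

    /* one domain, shared by the display controllers */
    domain = iommu_domain_alloc(&platform_bus_type);
    if (domain)
        err = iommu_attach_device(domain, dc->dev);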
Signed-off-by: Thierry Reding <treding@nvidia.com>
The DRM driver's ->load() implementation didn't do a good job (no job at
all really) cleaning up on failure. Fix that by undoing any prior setup
when an error occurs. This requires a bit of rework to make it possible
to clean up fbdev midway.
This was tested by injecting errors at various points during the
initialization sequence and verifying that error cleanup didn't crash
and no memory leaked (using kmemleak).
Reported-by: Stéphane Marchesin <marcheu@chromium.org>
Reported-by: Sean Paul <seanpaul@chromium.org>
Reviewed-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
The drm_gem_object_release() function already performs this cleanup, so
there is no reason to do it explicitly.
Signed-off-by: Thierry Reding <treding@nvidia.com>
There is only a single location where the function needs to do cleanup.
Skip the error unwinding path and call the cleanup function directly
instead.
Signed-off-by: Thierry Reding <treding@nvidia.com>
This function implements the buffer object allocation that is common to
both the allocation and import paths.
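A sketch of the shape of such a helper (error handling elided):

    static struct tegra_bo *tegra_bo_alloc_object(struct drm_device *drm,
                                                  size_t size)
    {
        struct tegra_bo *bo;

        bo = kzalloc(sizeof(*bo), GFP_KERNEL);
        if (!bo)
            return ERR_PTR(-ENOMEM);

        /* initialization shared by allocation and import */
        host1x_bo_init(&bo->base, &tegra_bo_ops);
        drm_gem_object_init(drm, &bo->gem, size);
        ...
    }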
Signed-off-by: Thierry Reding <treding@nvidia.com>
Make sure the DSI PHY_TIMING and BTA_TIMING registers are initialized
when the clocks are set up as opposed to when the output is enabled.
This makes sure that the PHY timings are properly set up when the panel
is prepared and that DCS commands sent at that time use the appropriate
timings.
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Add support for sending MIPI DSI command packets from the host to a
peripheral. This is required for panels that need configuration before
they accept video data.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Implement ganged mode support for the Tegra DSI driver. The DSI host
controller to gang up with is specified via a phandle in the device
tree, and the resolved DSI host controller is used to program the
ganged-mode registers.
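The lookup boils down to something like this (assuming the property name
from the Tegra DSI device tree bindings):

    struct device_node *np;

    np = of_parse_phandle(dsi->dev->of_node, "nvidia,ganged-mode", 0);
    if (np) {
        /* resolve np to the slave DSI host controller */
        ...
    }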
Signed-off-by: Thierry Reding <treding@nvidia.com>
In preparation for adding ganged-mode support, this commit splits out
the tegra_dsi_set_timeout() function so that it can be reused for the
slave DSI controller.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Add support for DC-driven command mode. This is a mode where the video
stream sent by the display controller is packed into DCS command packets
(write_memory_start and write_memory_continue) by the DSI controller. It
can be used for panels with a remote framebuffer and is useful to save
power when used with a dynamic refresh rate (not yet supported by the
driver).
Signed-off-by: Thierry Reding <treding@nvidia.com>
For command mode panels, the DSI controller needs to be enabled and
configured so that panel drivers can send commands prior to the video
stream being enabled.
Move code from the monolithic output enable/disable functions into
smaller, reusable units to allow more fine-grained control over the
controller state.
Signed-off-by: Thierry Reding <treding@nvidia.com>
The driver wasn't even attempting to do any cleanup when probing failed.
Fix this by releasing any resources acquired up to the point of failure
and putting the device back into the original state (reset, clocks off).
Signed-off-by: Thierry Reding <treding@nvidia.com>
DSI panels can always be hotplugged via the DSI bus' attach/detach
infrastructure, so unconditionally mark the connector hotpluggable.
While at it, also make sure that when a panel is detached the connector
is marked unconnected before calling into the DRM hotplug helpers to
reflect the correct state.
Signed-off-by: Thierry Reding <treding@nvidia.com>
The common clock framework will take care of preparing and enabling the
parent of the DSI clock automatically.
Signed-off-by: Thierry Reding <treding@nvidia.com>
In preparation for supporting command mode panels, don't disable the
clock when the output is disabled. The output will be enabled only after
the panel has been programmed in command mode, so the clock must always
remain on.
As a side-effect, pad calibration now only needs to be done at driver
probe time, since neither power nor controller state will go away before
driver removal. While at it, use a 32-bit variable to store register
content because the registers are 32-bit even on 64-bit Tegra.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Rather than hardcoding them as macros, make the host and video FIFO
depths parameters so that they can be more easily adjusted if a new
generation of the Tegra SoC changes them.
While at it, set the depth of the video FIFO to the correct value of
1920 *words* rather than *bytes*.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Previously the panel and output were only enabled on encoder->dpms().
If userspace called DPMS on before doing a modeset, the driver would get
into a state where the connector had a DPMS state of ON, but the encoder
and output were not enabled (because the encoder was not yet attached to
the connector). Subsequent DPMS ON calls were then ignored because the
connector's state already matched the desired state.
This patch enables/disables the panel and output on modeset as well, so we
can catch the above case.
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Both display controllers are in their own power partition. Currently the
driver relies on the assumption that these partitions are on (which is
the hardware default). However some bootloaders may disable them, so the
driver must make sure to turn them back on to avoid hangs.
Signed-off-by: Thierry Reding <treding@nvidia.com>
During calibration, set the "internal reference level for drive pull-
down" to the value specified in the Tegra TRM.
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Include the clock lanes when calibrating the MIPI PHY on Tegra124
compatible devices.
Signed-off-by: Sean Paul <seanpaul@chromium.org>
[treding@nvidia.com: bikeshedding]
Signed-off-by: Thierry Reding <treding@nvidia.com>
By paving over the whole CTRL register value, the current code changes
MIPI_CAL_PRESCALE ("Auto-cal calibration step prescale") from 1 us to
0.1 us (value 0). In the description of the PHY's noise filter
(MIPI_CAL_NOISE_FLT), the TRM states that if the prescale value is 0
(i.e. 0.1 us), the filter should be set to a value between 2 and 5.
However, the current code sets it to 0.
For now, keep the prescale and filter values as-is, which are most
likely the power-on-reset values of 0x2 and 0xa, respectively.
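That is, start calibration with a read-modify-write rather than by
overwriting the whole register (sketch, using the register accessors
from the MIPI calibration driver):

    value = tegra_mipi_readl(mipi, MIPI_CAL_CTRL);
    value |= MIPI_CAL_CTRL_START;
    tegra_mipi_writel(mipi, value, MIPI_CAL_CTRL);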
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
On 64-bit platforms an unsigned long is 64 bits wide, which causes
unnecessary casting when values are passed to writel() or returned from
readl(). Make register values 32 bits wide to avoid that.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Use the u32 type for the offset in the host1x_job_gather structure for
consistency with other structures. Negative offsets don't make sense in
this context.
Signed-off-by: Thierry Reding <treding@nvidia.com>
This reduces the amount of casting that needs to be done to get rid of
annoying warnings on 64-bit builds.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Rather than casting to a u32, use the struct host1x_bo pointers
directly. This avoids annoying warnings on 64-bit builds.
Signed-off-by: Thierry Reding <treding@nvidia.com>
The introduction of the COMPILE_TEST dependency in commit 158b50aefa
(drm/tegra: Increase compile test coverage) removed the implicit
dependency on COMMON_CLK (previously selected via ARCH_TEGRA,
ARCH_MULTI_V7 and ARCH_MULTIPLATFORM), which can break the build when
compile-testing on platforms that don't enable it.
Reported-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
struct mipi_dsi_msg is a read-only structure, drivers should never need
to modify it. Make this explicit by making all references to the struct
const.
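The host's transfer operation then takes a const message:

    struct mipi_dsi_host_ops {
        ...
        ssize_t (*transfer)(struct mipi_dsi_host *host,
                            const struct mipi_dsi_msg *msg);
    };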
Acked-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Currently the mipi_dsi_dcs_write() function requires the DCS command
byte to be embedded within the write buffer whereas mipi_dsi_dcs_read()
has a separate parameter. Make them more symmetrical by adding an extra
command parameter to mipi_dsi_dcs_write().
The S6E8AA0 driver relies on the old asymmetric API and there's concern
that moving to the new API may be less efficient. Provide a new function
with the old semantics for those cases and make the S6E8AA0 driver use
it instead.
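The resulting pair of prototypes (the symmetric API alongside the
buffer-based variant with the old semantics):

    ssize_t mipi_dsi_dcs_write(struct mipi_dsi_device *dsi, u8 cmd,
                               const void *data, size_t len);
    ssize_t mipi_dsi_dcs_write_buffer(struct mipi_dsi_device *dsi,
                                      const void *data, size_t len);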
Reviewed-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
A common pattern is starting to emerge for higher level transfer
helpers. Create a new helper that encapsulates this pattern and avoids
code duplication.
Acked-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
This commit introduces a new function, mipi_dsi_create_packet(), which
converts a MIPI DSI message into a MIPI DSI packet. The packet is as
close as possible to the protocol described in the DSI specification,
and is useful in drivers that need to write a packet into a FIFO to send
a message off to the peripheral.
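Typical usage in a host driver looks something like this:

    struct mipi_dsi_packet packet;
    int err;

    err = mipi_dsi_create_packet(&packet, msg);
    if (err < 0)
        return err;

    /* write the 4-byte header, then the payload (if any), into
     * the controller's TX FIFO */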
Suggested-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Add two helpers, mipi_dsi_packet_format_is_{short,long}(), that help in
determining the format of a packet.
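For example, when writing a packet into a FIFO:

    if (mipi_dsi_packet_format_is_long(msg->type)) {
        /* 4-byte header followed by a payload */
    } else {
        /* short packet: everything fits in the 4-byte header */
    }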
Signed-off-by: Thierry Reding <treding@nvidia.com>
The %* field width specifier expects an int, which works fine with
size_t arguments on 32-bit builds because the types have the same width.
However on 64-bit, size_t is typedef'd to unsigned long and will cause a
build warning.
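One way to silence the warning is an explicit cast (illustration, not
the literal hunk):

    /* the '*' width consumes an int, so the size_t must be cast */
    dev_dbg(dev, "payload: %*ph\n", (int)len, data);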
Signed-off-by: Thierry Reding <treding@nvidia.com>