Merge tag 'drm-misc-next-2020-03-09' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 5.7:

UAPI Changes:

Cross-subsystem Changes:

Core Changes:

Driver Changes:
 - fb-helper: Remove drm_fb_helper_{add,add_all,remove}_one_connector
 - fbdev: some cleanups and dead-code removal
 - Conversions to simple-encoder
 - zero-length array removal
 - Panel: panel-dpi support in panel-simple, Novatek NT35510, Elida
   KD35T133,

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maxime Ripard <maxime@cerno.tech>
Link: https://patchwork.freedesktop.org/patch/msgid/20200309135439.dicfnbo4ikj4tkz7@gilmour
Dave Airlie 2020-03-12 12:42:41 +10:00
commit 9e12da086e
94 changed files with 3016 additions and 922 deletions


@ -1,123 +1 @@
display-timing bindings
=======================
display-timings node
--------------------
required properties:
- none
optional properties:
- native-mode: The native mode for the display, in case multiple modes are
provided. When omitted, assume the first node is the native.
timing subnode
--------------
required properties:
- hactive, vactive: display resolution
- hfront-porch, hback-porch, hsync-len: horizontal display timing parameters
in pixels
vfront-porch, vback-porch, vsync-len: vertical display timing parameters in
lines
- clock-frequency: display clock in Hz
optional properties:
- hsync-active: hsync pulse is active low/high/ignored
- vsync-active: vsync pulse is active low/high/ignored
- de-active: data-enable pulse is active low/high/ignored
- pixelclk-active: with
- active high = drive pixel data on rising edge/
sample data on falling edge
- active low = drive pixel data on falling edge/
sample data on rising edge
- ignored = ignored
- syncclk-active: with
- active high = drive sync on rising edge/
sample sync on falling edge of pixel
clock
- active low = drive sync on falling edge/
sample sync on rising edge of pixel
clock
- omitted = same configuration as pixelclk-active
- interlaced (bool): boolean to enable interlaced mode
- doublescan (bool): boolean to enable doublescan mode
- doubleclk (bool): boolean to enable doubleclock mode
All the optional properties that are not bool follow the following logic:
<1>: high active
<0>: low active
omitted: not used on hardware
There are different ways of describing the capabilities of a display. The
devicetree representation corresponds to the one commonly found in datasheets
for displays. If a display supports multiple signal timings, the native-mode
can be specified.
The parameters are defined as:
+----------+-------------------------------------+----------+-------+
|          |        ^                            |          |       |
|          |        |vback_porch                 |          |       |
|          |        v                            |          |       |
+----------#######################################----------+-------+
|          #        ^                            #          |       |
|          #        |                            #          |       |
|   hback  #        |                            #  hfront  | hsync |
|   porch  #        |       hactive              #  porch   |  len  |
|<-------->#<-------+--------------------------->#<-------->|<----->|
|          #        |                            #          |       |
|          #        |vactive                     #          |       |
|          #        |                            #          |       |
|          #        v                            #          |       |
+----------#######################################----------+-------+
|          |        ^                            |          |       |
|          |        |vfront_porch                |          |       |
|          |        v                            |          |       |
+----------+-------------------------------------+----------+-------+
|          |        ^                            |          |       |
|          |        |vsync_len                   |          |       |
|          |        v                            |          |       |
+----------+-------------------------------------+----------+-------+
Note: In addition to being used as subnode(s) of display-timings, the timing
subnode may also be used on its own. This is appropriate if only one mode
need be conveyed. In this case, the node should be named 'panel-timing'.
Example:
display-timings {
native-mode = <&timing0>;
timing0: 1080p24 {
/* 1920x1080p24 */
clock-frequency = <52000000>;
hactive = <1920>;
vactive = <1080>;
hfront-porch = <25>;
hback-porch = <25>;
hsync-len = <25>;
vback-porch = <2>;
vfront-porch = <2>;
vsync-len = <2>;
hsync-active = <1>;
};
};
Every required property also supports the use of ranges, so the commonly used
datasheet description with minimum, typical and maximum values can be used.
Example:
timing1: timing {
/* 1920x1080p24 */
clock-frequency = <148500000>;
hactive = <1920>;
vactive = <1080>;
hsync-len = <0 44 60>;
hfront-porch = <80 88 95>;
hback-porch = <100 148 160>;
vfront-porch = <0 4 6>;
vback-porch = <0 36 50>;
vsync-len = <0 5 6>;
};
See display-timings.yaml in this directory.


@ -0,0 +1,77 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/display-timings.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: display timing bindings
maintainers:
- Thierry Reding <thierry.reding@gmail.com>
- Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
- Sam Ravnborg <sam@ravnborg.org>
description: |
A display panel may be able to handle several display timings,
with different resolutions.
The display-timings node makes it possible to specify the timing
and to specify the timing that is native for the display.
properties:
$nodename:
const: display-timings
native-mode:
$ref: /schemas/types.yaml#/definitions/phandle
description: |
The default display timing is the one specified as native-mode.
If no native-mode is specified then the first node is assumed the
native mode.
patternProperties:
"^timing":
type: object
allOf:
- $ref: panel-timing.yaml#
additionalProperties: false
examples:
- |+
/*
* Example that specifies panel timing using minimum, typical,
* maximum values as commonly used in datasheet description.
* timing1 is the native-mode.
*/
display-timings {
native-mode = <&timing1>;
timing0 {
/* 1920x1080p24 */
clock-frequency = <148500000>;
hactive = <1920>;
vactive = <1080>;
hsync-len = <0 44 60>;
hfront-porch = <80 88 95>;
hback-porch = <100 148 160>;
vfront-porch = <0 4 6>;
vback-porch = <0 36 50>;
vsync-len = <0 5 6>;
};
timing1 {
/* 1920x1080p24 */
clock-frequency = <52000000>;
hactive = <1920>;
vactive = <1080>;
hfront-porch = <25>;
hback-porch = <25>;
hsync-len = <0 25 25>;
vback-porch = <2>;
vfront-porch = <2>;
vsync-len = <2>;
hsync-active = <1>;
pixelclk-active = <1>;
};
};
...


@ -0,0 +1,49 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/elida,kd35t133.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Elida KD35T133 3.5in 320x480 DSI panel
maintainers:
- Heiko Stuebner <heiko.stuebner@theobroma-systems.com>
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
const: elida,kd35t133
reg: true
backlight: true
reset-gpios: true
iovcc-supply:
description: regulator that supplies the iovcc voltage
vdd-supply:
description: regulator that supplies the vdd voltage
required:
- compatible
- reg
- backlight
- iovcc-supply
- vdd-supply
additionalProperties: false
examples:
- |
dsi@ff450000 {
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "elida,kd35t133";
reg = <0>;
backlight = <&backlight>;
iovcc-supply = <&vcc_1v8>;
vdd-supply = <&vcc3v3_lcd>;
};
};
...


@ -0,0 +1,56 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/novatek,nt35510.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Novatek NT35510-based display panels
maintainers:
- Linus Walleij <linus.walleij@linaro.org>
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
items:
- const: hydis,hva40wv1
- const: novatek,nt35510
description: This indicates the panel manufacturer of the panel
that is in turn using the NT35510 panel driver. The compatible
string determines how the NT35510 panel driver shall be configured
to work with the indicated panel. The novatek,nt35510 compatible shall
always be provided as a fallback.
reg: true
reset-gpios: true
vdd-supply:
description: regulator that supplies the vdd voltage
vddi-supply:
description: regulator that supplies the vddi voltage
backlight: true
required:
- compatible
- reg
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
dsi@a0351000 {
#address-cells = <1>;
#size-cells = <0>;
panel {
compatible = "hydis,hva40wv1", "novatek,nt35510";
reg = <0>;
vdd-supply = <&ab8500_ldo_aux4_reg>;
vddi-supply = <&ab8500_ldo_aux6_reg>;
reset-gpios = <&gpio4 11 GPIO_ACTIVE_LOW>;
backlight = <&gpio_bl>;
};
};
...


@ -54,13 +54,20 @@ properties:
# Display Timings
panel-timing:
type: object
description:
Most display panels are restricted to a single resolution and
require specific display timings. The panel-timing subnode expresses those
timings as specified in the timing subnode section of the display timing
bindings defined in
Documentation/devicetree/bindings/display/panel/display-timing.txt.
timings.
allOf:
- $ref: panel-timing.yaml#
display-timings:
description:
Some display panels support several resolutions with different timings.
The display-timings binding supports specifying several timings and
optionally specifying which one is the native mode.
allOf:
- $ref: display-timings.yaml#
# Connectivity
port:


@ -1,50 +0,0 @@
Generic MIPI DPI Panel
======================
Required properties:
- compatible: "panel-dpi"
Optional properties:
- label: a symbolic name for the panel
- enable-gpios: panel enable gpio
- reset-gpios: GPIO to control the RESET pin
- vcc-supply: phandle of regulator that will be used to enable power to the display
- backlight: phandle of the backlight device
Required nodes:
- "panel-timing" containing video timings
(Documentation/devicetree/bindings/display/panel/display-timing.txt)
- Video port for DPI input
Example
-------
lcd0: display@0 {
compatible = "samsung,lte430wq-f0c", "panel-dpi";
label = "lcd";
backlight = <&backlight>;
port {
lcd_in: endpoint {
remote-endpoint = <&dpi_out>;
};
};
panel-timing {
clock-frequency = <9200000>;
hactive = <480>;
vactive = <272>;
hfront-porch = <8>;
hback-porch = <4>;
hsync-len = <41>;
vback-porch = <2>;
vfront-porch = <4>;
vsync-len = <10>;
hsync-active = <0>;
vsync-active = <0>;
de-active = <1>;
pixelclk-active = <1>;
};
};


@ -0,0 +1,81 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/panel-dpi.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Generic MIPI DPI Panel
maintainers:
- Sam Ravnborg <sam@ravnborg.org>
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
description:
Shall contain a panel specific compatible and "panel-dpi"
in that order.
items:
- {}
- const: panel-dpi
data-mapping:
enum:
- rgb24
- rgb565
- bgr666
description: |
Describes the media format, how the display panel is connected
to the display interface.
backlight: true
enable-gpios: true
height-mm: true
label: true
panel-timing: true
port: true
power-supply: true
reset-gpios: true
width-mm: true
required:
- panel-timing
- power-supply
additionalProperties: false
examples:
- |
panel@0 {
compatible = "osddisplays,osd057T0559-34ts", "panel-dpi";
label = "osddisplay";
power-supply = <&vcc_supply>;
data-mapping = "rgb565";
backlight = <&backlight>;
port {
lcd_in: endpoint {
remote-endpoint = <&dpi_out>;
};
};
panel-timing {
clock-frequency = <9200000>;
hactive = <800>;
vactive = <480>;
hfront-porch = <8>;
hback-porch = <4>;
hsync-len = <41>;
vback-porch = <2>;
vfront-porch = <4>;
vsync-len = <10>;
hsync-active = <0>;
vsync-active = <0>;
de-active = <1>;
pixelclk-active = <1>;
};
};
...


@ -177,6 +177,8 @@ properties:
- nec,nl4827hc19-05b
# Netron-DY E231732 7.0" WSVGA TFT LCD panel
- netron-dy,e231732
# NewEast Optoelectronics CO., LTD WJFH116008A eDP TFT LCD panel
- neweast,wjfh116008a
# Newhaven Display International 480 x 272 TFT LCD panel
- newhaven,nhd-4.3-480272ef-atxl
# NLT Technologies, Ltd. 15.6" FHD (1920x1080) LVDS TFT LCD panel


@ -0,0 +1,227 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/panel-timing.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: panel timing bindings
maintainers:
- Thierry Reding <thierry.reding@gmail.com>
- Sam Ravnborg <sam@ravnborg.org>
description: |
There are different ways of describing the timing data of a panel. The
devicetree representation corresponds to the one commonly found in datasheets
for panels.
The parameters are defined as seen in the following illustration.
+----------+-------------------------------------+----------+-------+
|          |        ^                            |          |       |
|          |        |vback_porch                 |          |       |
|          |        v                            |          |       |
+----------#######################################----------+-------+
|          #        ^                            #          |       |
|          #        |                            #          |       |
|   hback  #        |                            #  hfront  | hsync |
|   porch  #        |       hactive              #  porch   |  len  |
|<-------->#<-------+--------------------------->#<-------->|<----->|
|          #        |                            #          |       |
|          #        |vactive                     #          |       |
|          #        |                            #          |       |
|          #        v                            #          |       |
+----------#######################################----------+-------+
|          |        ^                            |          |       |
|          |        |vfront_porch                |          |       |
|          |        v                            |          |       |
+----------+-------------------------------------+----------+-------+
|          |        ^                            |          |       |
|          |        |vsync_len                   |          |       |
|          |        v                            |          |       |
+----------+-------------------------------------+----------+-------+
The following is the panel timings shown with time on the x-axis.
This matches the timing diagrams often found in data sheets.
          Active                 Front           Sync           Back
          Region                 Porch                          Porch
<-----------------------><----------------><-------------><-------------->
  //////////////////////|
 ////////////////////// |
//////////////////////  |..................               ................
                                            _______________
Timing can be specified either as a typical value or as a tuple
of min, typ, max values.
properties:
clock-frequency:
description: Panel clock in Hz
hactive:
$ref: /schemas/types.yaml#/definitions/uint32
description: Horizontal panel resolution in pixels
vactive:
$ref: /schemas/types.yaml#/definitions/uint32
description: Vertical panel resolution in pixels
hfront-porch:
description: Horizontal front porch panel timing
oneOf:
- allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
- maxItems: 1
items:
description: typical number of pixels
- allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
- minItems: 3
maxItems: 3
items:
description: min, typ, max number of pixels
hback-porch:
description: Horizontal back porch timing
oneOf:
- allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
- maxItems: 1
items:
description: typical number of pixels
- allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
- minItems: 3
maxItems: 3
items:
description: min, typ, max number of pixels
hsync-len:
description: Horizontal sync length panel timing
oneOf:
- allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
- maxItems: 1
items:
description: typical number of pixels
- allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
- minItems: 3
maxItems: 3
items:
description: min, typ, max number of pixels
vfront-porch:
description: Vertical front porch panel timing
oneOf:
- allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
- maxItems: 1
items:
description: typical number of lines
- allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
- minItems: 3
maxItems: 3
items:
description: min, typ, max number of lines
vback-porch:
description: Vertical back porch panel timing
oneOf:
- allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
- maxItems: 1
items:
description: typical number of lines
- allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
- minItems: 3
maxItems: 3
items:
description: min, typ, max number of lines
vsync-len:
description: Vertical sync length panel timing
oneOf:
- allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
- maxItems: 1
items:
description: typical number of lines
- allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
- minItems: 3
maxItems: 3
items:
description: min, typ, max number of lines
hsync-active:
description: |
Horizontal sync pulse.
0 selects active low, 1 selects active high.
If omitted then it is not used by the hardware
enum: [0, 1]
vsync-active:
description: |
Vertical sync pulse.
0 selects active low, 1 selects active high.
If omitted then it is not used by the hardware
enum: [0, 1]
de-active:
description: |
Data enable.
0 selects active low, 1 selects active high.
If omitted then it is not used by the hardware
enum: [0, 1]
pixelclk-active:
description: |
Data driving on rising or falling edge.
Use 0 to drive pixel data on falling edge and
sample data on rising edge.
Use 1 to drive pixel data on rising edge and
sample data on falling edge
enum: [0, 1]
syncclk-active:
description: |
Drive sync on rising or sample sync on falling edge.
If not specified then the setup is as specified by pixelclk-active.
Use 0 to drive sync on falling edge and
sample sync on rising edge of pixel clock.
Use 1 to drive sync on rising edge and
sample sync on falling edge of pixel clock
enum: [0, 1]
interlaced:
type: boolean
description: Enable interlaced mode
doublescan:
type: boolean
description: Enable double scan mode
doubleclk:
type: boolean
description: Enable double clock mode
required:
- clock-frequency
- hactive
- vactive
- hfront-porch
- hback-porch
- hsync-len
- vfront-porch
- vback-porch
- vsync-len
additionalProperties: false
...


@ -1,19 +0,0 @@
Rockchip DRM master device
================================
The Rockchip DRM master device is a virtual device needed to list all
vop devices or other display interface nodes that comprise the
graphics subsystem.
Required properties:
- compatible: Should be "rockchip,display-subsystem"
- ports: Should contain a list of phandles pointing to display interface port
of vop devices. vop definitions as defined in
Documentation/devicetree/bindings/display/rockchip/rockchip-vop.txt
example:
display-subsystem {
compatible = "rockchip,display-subsystem";
ports = <&vopl_out>, <&vopb_out>;
};


@ -0,0 +1,40 @@
# SPDX-License-Identifier: (GPL-2.0-only)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/rockchip/rockchip-drm.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Rockchip DRM master device
maintainers:
- Sandy Huang <hjc@rock-chips.com>
- Heiko Stuebner <heiko@sntech.de>
description: |
The Rockchip DRM master device is a virtual device needed to list all
vop devices or other display interface nodes that comprise the
graphics subsystem.
properties:
compatible:
const: rockchip,display-subsystem
ports:
$ref: /schemas/types.yaml#/definitions/phandle-array
description: |
Should contain a list of phandles pointing to display interface port
of vop devices. vop definitions as defined in
Documentation/devicetree/bindings/display/rockchip/rockchip-vop.txt
required:
- compatible
- ports
additionalProperties: false
examples:
- |
display-subsystem {
compatible = "rockchip,display-subsystem";
ports = <&vopl_out>, <&vopb_out>;
};


@ -425,6 +425,8 @@ patternProperties:
description: Shenzhen Hugsun Technology Co. Ltd.
"^hwacom,.*":
description: HwaCom Systems Inc.
"^hydis,.*":
description: Hydis Technologies
"^hyundai,.*":
description: Hyundai Technology
"^i2se,.*":
@ -665,6 +667,8 @@ patternProperties:
description: Netron DY
"^netxeon,.*":
description: Shenzhen Netxeon Technology CO., LTD
"^neweast,.*":
description: Guangdong Neweast Optoelectronics CO., LTD
"^nexbox,.*":
description: Nexbox
"^nextthing,.*":


@ -359,23 +359,6 @@ Contact: Sean Paul
Level: Starter
drm_fb_helper tasks
-------------------
- drm_fb_helper_restore_fbdev_mode_unlocked() should call restore_fbdev_mode()
not the _force variant so it can bail out if there is a master. But first
these igt tests need to be fixed: kms_fbcon_fbt@psr and
kms_fbcon_fbt@psr-suspend.
- The max connector argument for drm_fb_helper_init() isn't used anymore and
can be removed.
- The helper doesn't keep an array of connectors anymore so these can be
removed: drm_fb_helper_single_add_all_connectors(),
drm_fb_helper_add_one_connector() and drm_fb_helper_remove_one_connector().
Level: Intermediate
connector register/unregister fixes
-----------------------------------


@ -5022,7 +5022,7 @@ L: dri-devel@lists.freedesktop.org
L: linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
F: drivers/dma-buf/
F: include/linux/dma-buf*
F: include/linux/reservation.h
F: include/linux/dma-resv.h
F: include/linux/*fence.h
F: Documentation/driver-api/dma-buf.rst
K: dma_(buf|fence|resv)
@ -5336,6 +5336,13 @@ F: drivers/gpu/drm/msm/
F: include/uapi/drm/msm_drm.h
F: Documentation/devicetree/bindings/display/msm/
DRM DRIVER FOR NOVATEK NT35510 PANELS
M: Linus Walleij <linus.walleij@linaro.org>
T: git git://anongit.freedesktop.org/drm/drm-misc
S: Maintained
F: drivers/gpu/drm/panel/panel-novatek-nt35510.c
F: Documentation/devicetree/bindings/display/panel/novatek,nt35510.yaml
DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS
M: Ben Skeggs <bskeggs@redhat.com>
L: dri-devel@lists.freedesktop.org


@ -39,6 +39,16 @@ config UDMABUF
A driver to let userspace turn memfd regions into dma-bufs.
Qemu can use this to create host dmabufs for guest framebuffers.
config DMABUF_MOVE_NOTIFY
bool "Move notify between drivers (EXPERIMENTAL)"
default n
help
Don't pin buffers if the dynamic DMA-buf interface is available on both the
exporter as well as the importer. This fixes a security problem where
userspace is able to pin unrestricted amounts of memory through DMA-buf.
But marked experimental because we don't yet have a consistent execution
context and memory management between drivers.
config DMABUF_SELFTESTS
tristate "Selftests for the dma-buf interfaces"
default n


@ -525,7 +525,10 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
}
if (WARN_ON(exp_info->ops->cache_sgt_mapping &&
exp_info->ops->dynamic_mapping))
(exp_info->ops->pin || exp_info->ops->unpin)))
return ERR_PTR(-EINVAL);
if (WARN_ON(!exp_info->ops->pin != !exp_info->ops->unpin))
return ERR_PTR(-EINVAL);
if (!try_module_get(exp_info->owner))
@ -652,7 +655,8 @@ EXPORT_SYMBOL_GPL(dma_buf_put);
* calls attach() of dma_buf_ops to allow device-specific attach functionality
* @dmabuf: [in] buffer to attach device to.
* @dev: [in] device to be attached.
* @dynamic_mapping: [in] calling convention for map/unmap
* @importer_ops [in] importer operations for the attachment
* @importer_priv [in] importer private pointer for the attachment
*
* Returns struct dma_buf_attachment pointer for this attachment. Attachments
* must be cleaned up by calling dma_buf_detach().
@ -668,7 +672,8 @@ EXPORT_SYMBOL_GPL(dma_buf_put);
*/
struct dma_buf_attachment *
dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
bool dynamic_mapping)
const struct dma_buf_attach_ops *importer_ops,
void *importer_priv)
{
struct dma_buf_attachment *attach;
int ret;
@ -676,13 +681,17 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
if (WARN_ON(!dmabuf || !dev))
return ERR_PTR(-EINVAL);
if (WARN_ON(importer_ops && !importer_ops->move_notify))
return ERR_PTR(-EINVAL);
attach = kzalloc(sizeof(*attach), GFP_KERNEL);
if (!attach)
return ERR_PTR(-ENOMEM);
attach->dev = dev;
attach->dmabuf = dmabuf;
attach->dynamic_mapping = dynamic_mapping;
attach->importer_ops = importer_ops;
attach->importer_priv = importer_priv;
if (dmabuf->ops->attach) {
ret = dmabuf->ops->attach(dmabuf, attach);
@ -701,15 +710,19 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
dma_buf_is_dynamic(dmabuf)) {
struct sg_table *sgt;
if (dma_buf_is_dynamic(attach->dmabuf))
if (dma_buf_is_dynamic(attach->dmabuf)) {
dma_resv_lock(attach->dmabuf->resv, NULL);
ret = dma_buf_pin(attach);
if (ret)
goto err_unlock;
}
sgt = dmabuf->ops->map_dma_buf(attach, DMA_BIDIRECTIONAL);
if (!sgt)
sgt = ERR_PTR(-ENOMEM);
if (IS_ERR(sgt)) {
ret = PTR_ERR(sgt);
goto err_unlock;
goto err_unpin;
}
if (dma_buf_is_dynamic(attach->dmabuf))
dma_resv_unlock(attach->dmabuf->resv);
@ -723,6 +736,10 @@ err_attach:
kfree(attach);
return ERR_PTR(ret);
err_unpin:
if (dma_buf_is_dynamic(attach->dmabuf))
dma_buf_unpin(attach);
err_unlock:
if (dma_buf_is_dynamic(attach->dmabuf))
dma_resv_unlock(attach->dmabuf->resv);
@ -743,7 +760,7 @@ EXPORT_SYMBOL_GPL(dma_buf_dynamic_attach);
struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
struct device *dev)
{
return dma_buf_dynamic_attach(dmabuf, dev, false);
return dma_buf_dynamic_attach(dmabuf, dev, NULL, NULL);
}
EXPORT_SYMBOL_GPL(dma_buf_attach);
@ -766,8 +783,10 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
dmabuf->ops->unmap_dma_buf(attach, attach->sgt, attach->dir);
if (dma_buf_is_dynamic(attach->dmabuf))
if (dma_buf_is_dynamic(attach->dmabuf)) {
dma_buf_unpin(attach);
dma_resv_unlock(attach->dmabuf->resv);
}
}
dma_resv_lock(dmabuf->resv, NULL);
@ -780,6 +799,44 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
}
EXPORT_SYMBOL_GPL(dma_buf_detach);
/**
* dma_buf_pin - Lock down the DMA-buf
*
* @attach: [in] attachment which should be pinned
*
* Returns:
* 0 on success, negative error code on failure.
*/
int dma_buf_pin(struct dma_buf_attachment *attach)
{
struct dma_buf *dmabuf = attach->dmabuf;
int ret = 0;
dma_resv_assert_held(dmabuf->resv);
if (dmabuf->ops->pin)
ret = dmabuf->ops->pin(attach);
return ret;
}
EXPORT_SYMBOL_GPL(dma_buf_pin);
/**
* dma_buf_unpin - Remove lock from DMA-buf
*
* @attach: [in] attachment which should be unpinned
*/
void dma_buf_unpin(struct dma_buf_attachment *attach)
{
struct dma_buf *dmabuf = attach->dmabuf;
dma_resv_assert_held(dmabuf->resv);
if (dmabuf->ops->unpin)
dmabuf->ops->unpin(attach);
}
EXPORT_SYMBOL_GPL(dma_buf_unpin);
/**
* dma_buf_map_attachment - Returns the scatterlist table of the attachment;
* mapped into _device_ address space. Is a wrapper for map_dma_buf() of the
@ -799,6 +856,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
enum dma_data_direction direction)
{
struct sg_table *sg_table;
int r;
might_sleep();
@ -820,13 +878,23 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
return attach->sgt;
}
if (dma_buf_is_dynamic(attach->dmabuf))
if (dma_buf_is_dynamic(attach->dmabuf)) {
dma_resv_assert_held(attach->dmabuf->resv);
if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) {
r = dma_buf_pin(attach);
if (r)
return ERR_PTR(r);
}
}
sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction);
if (!sg_table)
sg_table = ERR_PTR(-ENOMEM);
if (IS_ERR(sg_table) && dma_buf_is_dynamic(attach->dmabuf) &&
!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY))
dma_buf_unpin(attach);
if (!IS_ERR(sg_table) && attach->dmabuf->ops->cache_sgt_mapping) {
attach->sgt = sg_table;
attach->dir = direction;
@ -865,9 +933,33 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
dma_resv_assert_held(attach->dmabuf->resv);
attach->dmabuf->ops->unmap_dma_buf(attach, sg_table, direction);
if (dma_buf_is_dynamic(attach->dmabuf) &&
!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY))
dma_buf_unpin(attach);
}
EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
/**
* dma_buf_move_notify - notify attachments that DMA-buf is moving
*
* @dmabuf: [in] buffer which is moving
*
* Informs all attachments that they need to destroy and recreate all their
* mappings.
*/
void dma_buf_move_notify(struct dma_buf *dmabuf)
{
struct dma_buf_attachment *attach;
dma_resv_assert_held(dmabuf->resv);
list_for_each_entry(attach, &dmabuf->attachments, node)
if (attach->importer_ops)
attach->importer_ops->move_notify(attach);
}
EXPORT_SYMBOL_GPL(dma_buf_move_notify);
/**
* DOC: cpu access
*
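
To put the reworked attach interface in context: a dynamic importer now passes a struct dma_buf_attach_ops (whose move_notify callback is mandatory) plus an opaque importer_priv pointer to dma_buf_dynamic_attach(), and maps the attachment with the reservation lock held. The following sketch is illustrative only and not part of this series; my_importer and my_importer_invalidate() are hypothetical placeholders.

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>

struct my_importer;					/* hypothetical importer state */
void my_importer_invalidate(struct my_importer *imp);	/* hypothetical helper */

static void my_move_notify(struct dma_buf_attachment *attach)
{
	/* The exporter is about to move the buffer: drop cached mappings,
	 * they are re-created on the next dma_buf_map_attachment() call. */
	my_importer_invalidate(attach->importer_priv);
}

static const struct dma_buf_attach_ops my_attach_ops = {
	.move_notify = my_move_notify,
};

static struct sg_table *my_import(struct dma_buf *dmabuf, struct device *dev,
				  struct my_importer *imp)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	/* Providing importer ops marks this attachment as dynamic;
	 * dma_buf_attach() passes NULL ops and keeps the pinning behaviour. */
	attach = dma_buf_dynamic_attach(dmabuf, dev, &my_attach_ops, imp);
	if (IS_ERR(attach))
		return ERR_CAST(attach);

	/* Dynamic attachments map and unmap under the reservation lock. */
	dma_resv_lock(dmabuf->resv, NULL);
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	dma_resv_unlock(dmabuf->resv);

	if (IS_ERR(sgt))
		dma_buf_detach(dmabuf, attach);

	return sgt;
}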


@ -54,9 +54,6 @@ config DRM_DEBUG_MM
If in doubt, say "N".
config DRM_EXPORT_FOR_TESTS
bool
config DRM_DEBUG_SELFTEST
tristate "kselftests for DRM"
depends on DRM
@ -470,6 +467,9 @@ config DRM_SAVAGE
endif # DRM_LEGACY
config DRM_EXPORT_FOR_TESTS
bool
# Separate option because drm_panel_orientation_quirks.c is shared with fbdev
config DRM_PANEL_ORIENTATION_QUIRKS
tristate


@ -28,6 +28,7 @@
#include <linux/file.h>
#include <linux/pagemap.h>
#include <linux/sync_file.h>
#include <linux/dma-buf.h>
#include <drm/amdgpu_drm.h>
#include <drm/drm_syncobj.h>
@ -415,7 +416,9 @@ static int amdgpu_cs_bo_validate(struct amdgpu_cs_parser *p,
/* Don't move this buffer if we have depleted our allowance
* to move it. Don't move anything if the threshold is zero.
*/
if (p->bytes_moved < p->bytes_moved_threshold) {
if (p->bytes_moved < p->bytes_moved_threshold &&
(!bo->tbo.base.dma_buf ||
list_empty(&bo->tbo.base.dma_buf->attachments))) {
if (!amdgpu_gmc_vram_full_visible(&adev->gmc) &&
(bo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED)) {
/* And don't move a CPU_ACCESS_REQUIRED BO to limited


@ -222,6 +222,37 @@ static void amdgpu_dma_buf_detach(struct dma_buf *dmabuf,
bo->prime_shared_count--;
}
/**
* amdgpu_dma_buf_pin - &dma_buf_ops.pin implementation
*
* @attach: attachment to pin down
*
* Pin the BO which is backing the DMA-buf so that it can't move any more.
*/
static int amdgpu_dma_buf_pin(struct dma_buf_attachment *attach)
{
struct drm_gem_object *obj = attach->dmabuf->priv;
struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
/* pin buffer into GTT */
return amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT);
}
/**
* amdgpu_dma_buf_unpin - &dma_buf_ops.unpin implementation
*
* @attach: attachment to unpin
*
* Unpin a previously pinned BO to make it movable again.
*/
static void amdgpu_dma_buf_unpin(struct dma_buf_attachment *attach)
{
struct drm_gem_object *obj = attach->dmabuf->priv;
struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
amdgpu_bo_unpin(bo);
}
/**
* amdgpu_dma_buf_map - &dma_buf_ops.map_dma_buf implementation
* @attach: DMA-buf attachment
@ -244,9 +275,19 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
struct sg_table *sgt;
long r;
r = amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT);
if (r)
return ERR_PTR(r);
if (!bo->pin_count) {
/* move buffer into GTT */
struct ttm_operation_ctx ctx = { false, false };
amdgpu_bo_placement_from_domain(bo, AMDGPU_GEM_DOMAIN_GTT);
r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
if (r)
return ERR_PTR(r);
} else if (!(amdgpu_mem_type_to_domain(bo->tbo.mem.mem_type) &
AMDGPU_GEM_DOMAIN_GTT)) {
return ERR_PTR(-EBUSY);
}
sgt = drm_prime_pages_to_sg(bo->tbo.ttm->pages, bo->tbo.num_pages);
if (IS_ERR(sgt))
@ -277,13 +318,9 @@ static void amdgpu_dma_buf_unmap(struct dma_buf_attachment *attach,
struct sg_table *sgt,
enum dma_data_direction dir)
{
struct drm_gem_object *obj = attach->dmabuf->priv;
struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
sg_free_table(sgt);
kfree(sgt);
amdgpu_bo_unpin(bo);
}
/**
@ -327,9 +364,10 @@ static int amdgpu_dma_buf_begin_cpu_access(struct dma_buf *dma_buf,
}
const struct dma_buf_ops amdgpu_dmabuf_ops = {
.dynamic_mapping = true,
.attach = amdgpu_dma_buf_attach,
.detach = amdgpu_dma_buf_detach,
.pin = amdgpu_dma_buf_pin,
.unpin = amdgpu_dma_buf_unpin,
.map_dma_buf = amdgpu_dma_buf_map,
.unmap_dma_buf = amdgpu_dma_buf_unmap,
.release = drm_gem_dmabuf_release,
@ -412,6 +450,73 @@ error:
return ERR_PTR(ret);
}
/**
* amdgpu_dma_buf_move_notify - &attach.move_notify implementation
*
* @attach: the DMA-buf attachment
*
* Invalidate the DMA-buf attachment, making sure that we re-create the
* mapping before the next use.
*/
static void
amdgpu_dma_buf_move_notify(struct dma_buf_attachment *attach)
{
struct drm_gem_object *obj = attach->importer_priv;
struct ww_acquire_ctx *ticket = dma_resv_locking_ctx(obj->resv);
struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
struct ttm_operation_ctx ctx = { false, false };
struct ttm_placement placement = {};
struct amdgpu_vm_bo_base *bo_base;
int r;
if (bo->tbo.mem.mem_type == TTM_PL_SYSTEM)
return;
r = ttm_bo_validate(&bo->tbo, &placement, &ctx);
if (r) {
DRM_ERROR("Failed to invalidate DMA-buf import (%d))\n", r);
return;
}
for (bo_base = bo->vm_bo; bo_base; bo_base = bo_base->next) {
struct amdgpu_vm *vm = bo_base->vm;
struct dma_resv *resv = vm->root.base.bo->tbo.base.resv;
if (ticket) {
/* When we get an error here it means that somebody
* else is holding the VM lock and updating page tables
* So we can just continue here.
*/
r = dma_resv_lock(resv, ticket);
if (r)
continue;
} else {
/* TODO: This is more problematic and we actually need
* to allow page tables updates without holding the
* lock.
*/
if (!dma_resv_trylock(resv))
continue;
}
r = amdgpu_vm_clear_freed(adev, vm, NULL);
if (!r)
r = amdgpu_vm_handle_moved(adev, vm);
if (r && r != -EBUSY)
DRM_ERROR("Failed to invalidate VM page tables (%d))\n",
r);
dma_resv_unlock(resv);
}
}
static const struct dma_buf_attach_ops amdgpu_dma_buf_attach_ops = {
.move_notify = amdgpu_dma_buf_move_notify
};
/**
* amdgpu_gem_prime_import - &drm_driver.gem_prime_import implementation
* @dev: DRM device
@ -444,7 +549,8 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
if (IS_ERR(obj))
return obj;
attach = dma_buf_dynamic_attach(dma_buf, dev->dev, true);
attach = dma_buf_dynamic_attach(dma_buf, dev->dev,
&amdgpu_dma_buf_attach_ops, obj);
if (IS_ERR(attach)) {
drm_gem_object_put(obj);
return ERR_CAST(attach);


@ -336,15 +336,12 @@ int amdgpu_fbdev_init(struct amdgpu_device *adev)
drm_fb_helper_prepare(adev->ddev, &rfbdev->helper,
&amdgpu_fb_helper_funcs);
ret = drm_fb_helper_init(adev->ddev, &rfbdev->helper,
AMDGPUFB_CONN_LIMIT);
ret = drm_fb_helper_init(adev->ddev, &rfbdev->helper);
if (ret) {
kfree(rfbdev);
return ret;
}
drm_fb_helper_single_add_all_connectors(&rfbdev->helper);
/* disable all the possible outputs/crtcs before entering KMS mode */
if (!amdgpu_device_has_dc_support(adev))
drm_helper_disable_unused_functions(adev->ddev);


@ -31,6 +31,7 @@
*/
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/dma-buf.h>
#include <drm/amdgpu_drm.h>
#include <drm/drm_cache.h>
@ -925,6 +926,9 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
return 0;
}
if (bo->tbo.base.import_attach)
dma_buf_pin(bo->tbo.base.import_attach);
bo->flags |= AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
/* force to pin into visible video ram */
if (!(bo->flags & AMDGPU_GEM_CREATE_NO_CPU_ACCESS))
@ -1008,6 +1012,9 @@ int amdgpu_bo_unpin(struct amdgpu_bo *bo)
amdgpu_bo_subtract_pin_size(bo);
if (bo->tbo.base.import_attach)
dma_buf_unpin(bo->tbo.base.import_attach);
for (i = 0; i < bo->placement.num_placement; i++) {
bo->placements[i].lpfn = 0;
bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
@ -1274,6 +1281,10 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
amdgpu_bo_kunmap(abo);
if (abo->tbo.base.dma_buf && !abo->tbo.base.import_attach &&
bo->mem.mem_type != TTM_PL_SYSTEM)
dma_buf_move_notify(abo->tbo.base.dma_buf);
/* remember the eviction */
if (evict)
atomic64_inc(&adev->num_evictions);


@ -440,9 +440,6 @@ dm_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
static void dm_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
struct drm_connector *connector)
{
struct amdgpu_dm_connector *master = container_of(mgr, struct amdgpu_dm_connector, mst_mgr);
struct drm_device *dev = master->base.dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
DRM_INFO("DM_MST: Disabling connector: %p [id: %d] [master: %p]\n",
@ -458,21 +455,11 @@ static void dm_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
}
drm_connector_unregister(connector);
if (adev->mode_info.rfbdev)
drm_fb_helper_remove_one_connector(&adev->mode_info.rfbdev->helper, connector);
drm_connector_put(connector);
}
static void dm_dp_mst_register_connector(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct amdgpu_device *adev = dev->dev_private;
if (adev->mode_info.rfbdev)
drm_fb_helper_add_one_connector(&adev->mode_info.rfbdev->helper, connector);
else
DRM_ERROR("adev->mode_info.rfbdev is NULL\n");
drm_connector_register(connector);
}


@ -129,18 +129,12 @@ int armada_fbdev_init(struct drm_device *dev)
drm_fb_helper_prepare(dev, fbh, &armada_fb_helper_funcs);
ret = drm_fb_helper_init(dev, fbh, 1);
ret = drm_fb_helper_init(dev, fbh);
if (ret) {
DRM_ERROR("failed to initialize drm fb helper\n");
goto err_fb_helper;
}
ret = drm_fb_helper_single_add_all_connectors(fbh);
if (ret) {
DRM_ERROR("failed to add fb connectors\n");
goto err_fb_setup;
}
ret = drm_fb_helper_initial_config(fbh, 32);
if (ret) {
DRM_ERROR("failed to set initial config\n");


@ -121,6 +121,7 @@ struct ast_private {
unsigned int next_index;
} cursor;
struct drm_encoder encoder;
struct drm_plane primary_plane;
struct drm_plane cursor_plane;
@ -238,13 +239,8 @@ struct ast_crtc {
u8 offset_x, offset_y;
};
struct ast_encoder {
struct drm_encoder base;
};
#define to_ast_crtc(x) container_of(x, struct ast_crtc, base)
#define to_ast_connector(x) container_of(x, struct ast_connector, base)
#define to_ast_encoder(x) container_of(x, struct ast_encoder, base)
struct ast_vbios_stdtable {
u8 misc;


@ -40,6 +40,7 @@
#include <drm/drm_gem_vram_helper.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_simple_kms_helper.h>
#include "ast_drv.h"
#include "ast_tables.h"
@ -957,28 +958,18 @@ err_kfree:
* Encoder
*/
static void ast_encoder_destroy(struct drm_encoder *encoder)
{
drm_encoder_cleanup(encoder);
kfree(encoder);
}
static const struct drm_encoder_funcs ast_enc_funcs = {
.destroy = ast_encoder_destroy,
};
static int ast_encoder_init(struct drm_device *dev)
{
struct ast_encoder *ast_encoder;
struct ast_private *ast = dev->dev_private;
struct drm_encoder *encoder = &ast->encoder;
int ret;
ast_encoder = kzalloc(sizeof(struct ast_encoder), GFP_KERNEL);
if (!ast_encoder)
return -ENOMEM;
ret = drm_simple_encoder_init(dev, encoder, DRM_MODE_ENCODER_DAC);
if (ret)
return ret;
drm_encoder_init(dev, &ast_encoder->base, &ast_enc_funcs,
DRM_MODE_ENCODER_DAC, NULL);
encoder->possible_crtcs = 1;
ast_encoder->base.possible_crtcs = 1;
return 0;
}
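
The simple-encoder conversions in this pull (ast above, and others) all follow the same shape: drivers that only needed an encoder with default cleanup drop their hand-rolled drm_encoder_funcs and wrapper struct. A hypothetical sketch of the target pattern, with foo_device as a placeholder:

#include <drm/drm_device.h>
#include <drm/drm_encoder.h>
#include <drm/drm_simple_kms_helper.h>

struct foo_device {
	struct drm_device drm;
	struct drm_encoder encoder;	/* embedded directly, no wrapper struct */
};

static int foo_encoder_init(struct foo_device *foo)
{
	struct drm_encoder *encoder = &foo->encoder;
	int ret;

	/*
	 * drm_simple_encoder_init() supplies the encoder funcs (plain
	 * drm_encoder_cleanup() on destroy), so no driver-specific funcs
	 * struct and no separate kzalloc()/kfree() pair are needed.
	 */
	ret = drm_simple_encoder_init(&foo->drm, encoder, DRM_MODE_ENCODER_DAC);
	if (ret)
		return ret;

	encoder->possible_crtcs = 1;
	return 0;
}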


@ -719,14 +719,18 @@ static int anx6345_i2c_probe(struct i2c_client *client,
/* 1.2V digital core power regulator */
anx6345->dvdd12 = devm_regulator_get(dev, "dvdd12");
if (IS_ERR(anx6345->dvdd12)) {
DRM_ERROR("dvdd12-supply not found\n");
if (PTR_ERR(anx6345->dvdd12) != -EPROBE_DEFER)
DRM_ERROR("Failed to get dvdd12 supply (%ld)\n",
PTR_ERR(anx6345->dvdd12));
return PTR_ERR(anx6345->dvdd12);
}
/* 2.5V digital core power regulator */
anx6345->dvdd25 = devm_regulator_get(dev, "dvdd25");
if (IS_ERR(anx6345->dvdd25)) {
DRM_ERROR("dvdd25-supply not found\n");
if (PTR_ERR(anx6345->dvdd25) != -EPROBE_DEFER)
DRM_ERROR("Failed to get dvdd25 supply (%ld)\n",
PTR_ERR(anx6345->dvdd25));
return PTR_ERR(anx6345->dvdd25);
}


@ -375,7 +375,6 @@ static int tc358764_attach(struct drm_bridge *bridge,
drm_connector_attach_encoder(&ctx->connector, bridge->encoder);
drm_panel_attach(ctx->panel, &ctx->connector);
ctx->connector.funcs->reset(&ctx->connector);
drm_fb_helper_add_one_connector(drm->fb_helper, &ctx->connector);
drm_connector_register(&ctx->connector);
return 0;
@ -384,10 +383,8 @@ static int tc358764_attach(struct drm_bridge *bridge,
static void tc358764_detach(struct drm_bridge *bridge)
{
struct tc358764 *ctx = bridge_to_tc358764(bridge);
struct drm_device *drm = bridge->dev;
drm_connector_unregister(&ctx->connector);
drm_fb_helper_remove_one_connector(drm->fb_helper, &ctx->connector);
drm_panel_detach(ctx->panel);
ctx->panel = NULL;
drm_connector_put(&ctx->connector);


@ -1,4 +1,4 @@
// SPDX-License-Identifier: GPL-2.0
// SPDX-License-Identifier: GPL-2.0 or MIT
/*
* Copyright 2018 Noralf Trønnes
*/


@ -736,6 +736,10 @@ static bool drm_dp_sideband_msg_build(struct drm_dp_sideband_msg_rx *msg,
if (msg->curchunk_idx >= msg->curchunk_len) {
/* do CRC */
crc4 = drm_dp_msg_data_crc4(msg->chunk, msg->curchunk_len - 1);
if (crc4 != msg->chunk[msg->curchunk_len - 1])
print_hex_dump(KERN_DEBUG, "wrong crc",
DUMP_PREFIX_NONE, 16, 1,
msg->chunk, msg->curchunk_len, false);
/* copy chunk into bigger msg */
memcpy(&msg->msg[msg->curlen], msg->chunk, msg->curchunk_len - 1);
msg->curlen += msg->curchunk_len - 1;
@ -1035,7 +1039,8 @@ static bool drm_dp_sideband_parse_req(struct drm_dp_sideband_msg_rx *raw,
}
}
static int build_dpcd_write(struct drm_dp_sideband_msg_tx *msg, u8 port_num, u32 offset, u8 num_bytes, u8 *bytes)
static void build_dpcd_write(struct drm_dp_sideband_msg_tx *msg,
u8 port_num, u32 offset, u8 num_bytes, u8 *bytes)
{
struct drm_dp_sideband_msg_req_body req;
@ -1045,17 +1050,14 @@ static int build_dpcd_write(struct drm_dp_sideband_msg_tx *msg, u8 port_num, u32
req.u.dpcd_write.num_bytes = num_bytes;
req.u.dpcd_write.bytes = bytes;
drm_dp_encode_sideband_req(&req, msg);
return 0;
}
static int build_link_address(struct drm_dp_sideband_msg_tx *msg)
static void build_link_address(struct drm_dp_sideband_msg_tx *msg)
{
struct drm_dp_sideband_msg_req_body req;
req.req_type = DP_LINK_ADDRESS;
drm_dp_encode_sideband_req(&req, msg);
return 0;
}
static int build_clear_payload_id_table(struct drm_dp_sideband_msg_tx *msg)
@ -1067,7 +1069,8 @@ static int build_clear_payload_id_table(struct drm_dp_sideband_msg_tx *msg)
return 0;
}
static int build_enum_path_resources(struct drm_dp_sideband_msg_tx *msg, int port_num)
static int build_enum_path_resources(struct drm_dp_sideband_msg_tx *msg,
int port_num)
{
struct drm_dp_sideband_msg_req_body req;
@ -1078,10 +1081,11 @@ static int build_enum_path_resources(struct drm_dp_sideband_msg_tx *msg, int por
return 0;
}
static int build_allocate_payload(struct drm_dp_sideband_msg_tx *msg, int port_num,
u8 vcpi, uint16_t pbn,
u8 number_sdp_streams,
u8 *sdp_stream_sink)
static void build_allocate_payload(struct drm_dp_sideband_msg_tx *msg,
int port_num,
u8 vcpi, uint16_t pbn,
u8 number_sdp_streams,
u8 *sdp_stream_sink)
{
struct drm_dp_sideband_msg_req_body req;
memset(&req, 0, sizeof(req));
@ -1094,11 +1098,10 @@ static int build_allocate_payload(struct drm_dp_sideband_msg_tx *msg, int port_n
number_sdp_streams);
drm_dp_encode_sideband_req(&req, msg);
msg->path_msg = true;
return 0;
}
static int build_power_updown_phy(struct drm_dp_sideband_msg_tx *msg,
int port_num, bool power_up)
static void build_power_updown_phy(struct drm_dp_sideband_msg_tx *msg,
int port_num, bool power_up)
{
struct drm_dp_sideband_msg_req_body req;
@ -1110,7 +1113,6 @@ static int build_power_updown_phy(struct drm_dp_sideband_msg_tx *msg,
req.u.port_num.port_number = port_num;
drm_dp_encode_sideband_req(&req, msg);
msg->path_msg = true;
return 0;
}
static int drm_dp_mst_assign_payload_id(struct drm_dp_mst_topology_mgr *mgr,
@ -2073,29 +2075,24 @@ ssize_t drm_dp_mst_dpcd_write(struct drm_dp_aux *aux,
offset, size, buffer);
}
static void drm_dp_check_mstb_guid(struct drm_dp_mst_branch *mstb, u8 *guid)
static int drm_dp_check_mstb_guid(struct drm_dp_mst_branch *mstb, u8 *guid)
{
int ret;
int ret = 0;
memcpy(mstb->guid, guid, 16);
if (!drm_dp_validate_guid(mstb->mgr, mstb->guid)) {
if (mstb->port_parent) {
ret = drm_dp_send_dpcd_write(
mstb->mgr,
mstb->port_parent,
DP_GUID,
16,
mstb->guid);
ret = drm_dp_send_dpcd_write(mstb->mgr,
mstb->port_parent,
DP_GUID, 16, mstb->guid);
} else {
ret = drm_dp_dpcd_write(
mstb->mgr->aux,
DP_GUID,
mstb->guid,
16);
ret = drm_dp_dpcd_write(mstb->mgr->aux,
DP_GUID, mstb->guid, 16);
}
}
return ret;
}
static void build_mst_prop_path(const struct drm_dp_mst_branch *mstb,
@ -2645,7 +2642,8 @@ static bool drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr,
return false;
}
static int build_dpcd_read(struct drm_dp_sideband_msg_tx *msg, u8 port_num, u32 offset, u8 num_bytes)
static void build_dpcd_read(struct drm_dp_sideband_msg_tx *msg,
u8 port_num, u32 offset, u8 num_bytes)
{
struct drm_dp_sideband_msg_req_body req;
@ -2654,8 +2652,6 @@ static int build_dpcd_read(struct drm_dp_sideband_msg_tx *msg, u8 port_num, u32
req.u.dpcd_read.dpcd_address = offset;
req.u.dpcd_read.num_bytes = num_bytes;
drm_dp_encode_sideband_req(&req, msg);
return 0;
}
static int drm_dp_send_sideband_msg(struct drm_dp_mst_topology_mgr *mgr,
@ -2881,7 +2877,7 @@ static int drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_sideband_msg_tx *txmsg;
struct drm_dp_link_address_ack_reply *reply;
struct drm_dp_mst_port *port, *tmp;
int i, len, ret, port_mask = 0;
int i, ret, port_mask = 0;
bool changed = false;
txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);
@ -2889,7 +2885,7 @@ static int drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
return -ENOMEM;
txmsg->dst = mstb;
len = build_link_address(txmsg);
build_link_address(txmsg);
mstb->link_address_sent = true;
drm_dp_queue_down_tx(mgr, txmsg);
@ -2910,7 +2906,9 @@ static int drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
DRM_DEBUG_KMS("link address reply: %d\n", reply->nports);
drm_dp_dump_link_address(reply);
drm_dp_check_mstb_guid(mstb, reply->guid);
ret = drm_dp_check_mstb_guid(mstb, reply->guid);
if (ret)
goto out;
for (i = 0; i < reply->nports; i++) {
port_mask |= BIT(reply->ports[i].port_number);
@ -2951,14 +2949,14 @@ void drm_dp_send_clear_payload_id_table(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_branch *mstb)
{
struct drm_dp_sideband_msg_tx *txmsg;
int len, ret;
int ret;
txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);
if (!txmsg)
return;
txmsg->dst = mstb;
len = build_clear_payload_id_table(txmsg);
build_clear_payload_id_table(txmsg);
drm_dp_queue_down_tx(mgr, txmsg);
@ -2976,7 +2974,6 @@ drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,
{
struct drm_dp_enum_path_resources_ack_reply *path_res;
struct drm_dp_sideband_msg_tx *txmsg;
int len;
int ret;
txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);
@ -2984,7 +2981,7 @@ drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,
return -ENOMEM;
txmsg->dst = mstb;
len = build_enum_path_resources(txmsg, port->port_num);
build_enum_path_resources(txmsg, port->port_num);
drm_dp_queue_down_tx(mgr, txmsg);
@ -3068,7 +3065,7 @@ static int drm_dp_payload_send_msg(struct drm_dp_mst_topology_mgr *mgr,
{
struct drm_dp_sideband_msg_tx *txmsg;
struct drm_dp_mst_branch *mstb;
int len, ret, port_num;
int ret, port_num;
u8 sinks[DRM_DP_MAX_SDP_STREAMS];
int i;
@ -3093,9 +3090,9 @@ static int drm_dp_payload_send_msg(struct drm_dp_mst_topology_mgr *mgr,
sinks[i] = i;
txmsg->dst = mstb;
len = build_allocate_payload(txmsg, port_num,
id,
pbn, port->num_sdp_streams, sinks);
build_allocate_payload(txmsg, port_num,
id,
pbn, port->num_sdp_streams, sinks);
drm_dp_queue_down_tx(mgr, txmsg);
@ -3124,7 +3121,7 @@ int drm_dp_send_power_updown_phy(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_port *port, bool power_up)
{
struct drm_dp_sideband_msg_tx *txmsg;
int len, ret;
int ret;
port = drm_dp_mst_topology_get_port_validated(mgr, port);
if (!port)
@ -3137,7 +3134,7 @@ int drm_dp_send_power_updown_phy(struct drm_dp_mst_topology_mgr *mgr,
}
txmsg->dst = port->parent;
len = build_power_updown_phy(txmsg, port->port_num, power_up);
build_power_updown_phy(txmsg, port->port_num, power_up);
drm_dp_queue_down_tx(mgr, txmsg);
ret = drm_dp_mst_wait_tx_reply(port->parent, txmsg);
@ -3359,7 +3356,6 @@ static int drm_dp_send_dpcd_read(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_port *port,
int offset, int size, u8 *bytes)
{
int len;
int ret = 0;
struct drm_dp_sideband_msg_tx *txmsg;
struct drm_dp_mst_branch *mstb;
@ -3374,7 +3370,7 @@ static int drm_dp_send_dpcd_read(struct drm_dp_mst_topology_mgr *mgr,
goto fail_put;
}
len = build_dpcd_read(txmsg, port->port_num, offset, size);
build_dpcd_read(txmsg, port->port_num, offset, size);
txmsg->dst = port->parent;
drm_dp_queue_down_tx(mgr, txmsg);
@ -3412,7 +3408,6 @@ static int drm_dp_send_dpcd_write(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_port *port,
int offset, int size, u8 *bytes)
{
int len;
int ret;
struct drm_dp_sideband_msg_tx *txmsg;
struct drm_dp_mst_branch *mstb;
@ -3427,7 +3422,7 @@ static int drm_dp_send_dpcd_write(struct drm_dp_mst_topology_mgr *mgr,
goto fail_put;
}
len = build_dpcd_write(txmsg, port->port_num, offset, size, bytes);
build_dpcd_write(txmsg, port->port_num, offset, size, bytes);
txmsg->dst = mstb;
drm_dp_queue_down_tx(mgr, txmsg);
@ -3673,7 +3668,12 @@ int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr,
DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n");
goto out_fail;
}
drm_dp_check_mstb_guid(mgr->mst_primary, guid);
ret = drm_dp_check_mstb_guid(mgr->mst_primary, guid);
if (ret) {
DRM_DEBUG_KMS("check mstb failed - undocked during suspend?\n");
goto out_fail;
}
/*
* For the final step of resuming the topology, we need to bring the
@ -4615,15 +4615,34 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
int ret;
ret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, buf, DP_RECEIVER_CAP_SIZE);
if (ret) {
seq_printf(m, "dpcd read failed\n");
goto out;
}
seq_printf(m, "dpcd: %*ph\n", DP_RECEIVER_CAP_SIZE, buf);
ret = drm_dp_dpcd_read(mgr->aux, DP_FAUX_CAP, buf, 2);
if (ret) {
seq_printf(m, "faux/mst read failed\n");
goto out;
}
seq_printf(m, "faux/mst: %*ph\n", 2, buf);
ret = drm_dp_dpcd_read(mgr->aux, DP_MSTM_CTRL, buf, 1);
if (ret) {
seq_printf(m, "mst ctrl read failed\n");
goto out;
}
seq_printf(m, "mst ctrl: %*ph\n", 1, buf);
/* dump the standard OUI branch header */
ret = drm_dp_dpcd_read(mgr->aux, DP_BRANCH_OUI, buf, DP_BRANCH_OUI_HEADER_SIZE);
if (ret) {
seq_printf(m, "branch oui read failed\n");
goto out;
}
seq_printf(m, "branch oui: %*phN devid: ", 3, buf);
for (i = 0x3; i < 0x8 && buf[i]; i++)
seq_printf(m, "%c", buf[i]);
seq_printf(m, " revision: hw: %x.%x sw: %x.%x\n",
@ -4632,6 +4651,7 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
seq_printf(m, "payload table: %*ph\n", DP_PAYLOAD_TABLE_SIZE, buf);
}
out:
mutex_unlock(&mgr->lock);
}


@ -450,7 +450,6 @@ EXPORT_SYMBOL(drm_fb_helper_prepare);
* drm_fb_helper_init - initialize a &struct drm_fb_helper
* @dev: drm device
* @fb_helper: driver-allocated fbdev helper structure to initialize
* @max_conn_count: max connector count (not used)
*
* This allocates the structures for the fbdev helper with the given limits.
* Note that this won't yet touch the hardware (through the driver interfaces)
@ -463,8 +462,7 @@ EXPORT_SYMBOL(drm_fb_helper_prepare);
* Zero if everything went ok, nonzero otherwise.
*/
int drm_fb_helper_init(struct drm_device *dev,
struct drm_fb_helper *fb_helper,
int max_conn_count)
struct drm_fb_helper *fb_helper)
{
int ret;
@ -2125,7 +2123,7 @@ static int drm_fbdev_client_hotplug(struct drm_client_dev *client)
drm_fb_helper_prepare(dev, fb_helper, &drm_fb_helper_generic_funcs);
ret = drm_fb_helper_init(dev, fb_helper, 0);
ret = drm_fb_helper_init(dev, fb_helper);
if (ret)
goto err;
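
With the max_conn_count argument gone and the connector array removed from the helper, a driver's fbdev setup reduces to roughly the following hypothetical sketch (foo_fb_helper_funcs is a placeholder):

#include <drm/drm_fb_helper.h>

static int foo_fbdev_init(struct drm_device *dev, struct drm_fb_helper *helper,
			  const struct drm_fb_helper_funcs *foo_fb_helper_funcs)
{
	int ret;

	drm_fb_helper_prepare(dev, helper, foo_fb_helper_funcs);

	ret = drm_fb_helper_init(dev, helper);	/* was (dev, helper, max_conn) */
	if (ret)
		return ret;

	/* drm_fb_helper_single_add_all_connectors() is gone; the helper
	 * walks the device's connector list itself. */
	return drm_fb_helper_initial_config(helper, 32);
}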


@ -23,14 +23,6 @@
#include "drm_internal.h"
static struct hdcp_srm {
u32 revoked_ksv_cnt;
u8 *revoked_ksv_list;
/* Mutex to protect above struct member */
struct mutex mutex;
} *srm_data;
static inline void drm_hdcp_print_ksv(const u8 *ksv)
{
DRM_DEBUG("\t%#02x, %#02x, %#02x, %#02x, %#02x\n",
@ -60,11 +52,11 @@ static u32 drm_hdcp_get_revoked_ksv_count(const u8 *buf, u32 vrls_length)
return ksv_count;
}
static u32 drm_hdcp_get_revoked_ksvs(const u8 *buf, u8 *revoked_ksv_list,
static u32 drm_hdcp_get_revoked_ksvs(const u8 *buf, u8 **revoked_ksv_list,
u32 vrls_length)
{
u32 parsed_bytes = 0, ksv_count = 0;
u32 vrl_ksv_cnt, vrl_ksv_sz, vrl_idx = 0;
u32 parsed_bytes = 0, ksv_count = 0;
do {
vrl_ksv_cnt = *buf;
@ -74,10 +66,10 @@ static u32 drm_hdcp_get_revoked_ksvs(const u8 *buf, u8 *revoked_ksv_list,
DRM_DEBUG("vrl: %d, Revoked KSVs: %d\n", vrl_idx++,
vrl_ksv_cnt);
memcpy(revoked_ksv_list, buf, vrl_ksv_sz);
memcpy((*revoked_ksv_list) + (ksv_count * DRM_HDCP_KSV_LEN),
buf, vrl_ksv_sz);
ksv_count += vrl_ksv_cnt;
revoked_ksv_list += vrl_ksv_sz;
buf += vrl_ksv_sz;
parsed_bytes += (vrl_ksv_sz + 1);
@ -91,7 +83,8 @@ static inline u32 get_vrl_length(const u8 *buf)
return drm_hdcp_be24_to_cpu(buf);
}
static int drm_hdcp_parse_hdcp1_srm(const u8 *buf, size_t count)
static int drm_hdcp_parse_hdcp1_srm(const u8 *buf, size_t count,
u8 **revoked_ksv_list, u32 *revoked_ksv_cnt)
{
struct hdcp_srm_header *header;
u32 vrl_length, ksv_count;
@ -131,29 +124,28 @@ static int drm_hdcp_parse_hdcp1_srm(const u8 *buf, size_t count)
ksv_count = drm_hdcp_get_revoked_ksv_count(buf, vrl_length);
if (!ksv_count) {
DRM_DEBUG("Revoked KSV count is 0\n");
return count;
return 0;
}
kfree(srm_data->revoked_ksv_list);
srm_data->revoked_ksv_list = kcalloc(ksv_count, DRM_HDCP_KSV_LEN,
GFP_KERNEL);
if (!srm_data->revoked_ksv_list) {
*revoked_ksv_list = kcalloc(ksv_count, DRM_HDCP_KSV_LEN, GFP_KERNEL);
if (!*revoked_ksv_list) {
DRM_ERROR("Out of Memory\n");
return -ENOMEM;
}
if (drm_hdcp_get_revoked_ksvs(buf, srm_data->revoked_ksv_list,
if (drm_hdcp_get_revoked_ksvs(buf, revoked_ksv_list,
vrl_length) != ksv_count) {
srm_data->revoked_ksv_cnt = 0;
kfree(srm_data->revoked_ksv_list);
*revoked_ksv_cnt = 0;
kfree(*revoked_ksv_list);
return -EINVAL;
}
srm_data->revoked_ksv_cnt = ksv_count;
return count;
*revoked_ksv_cnt = ksv_count;
return 0;
}
static int drm_hdcp_parse_hdcp2_srm(const u8 *buf, size_t count)
static int drm_hdcp_parse_hdcp2_srm(const u8 *buf, size_t count,
u8 **revoked_ksv_list, u32 *revoked_ksv_cnt)
{
struct hdcp_srm_header *header;
u32 vrl_length, ksv_count, ksv_sz;
@ -195,13 +187,11 @@ static int drm_hdcp_parse_hdcp2_srm(const u8 *buf, size_t count)
ksv_count = (*buf << 2) | DRM_HDCP_2_KSV_COUNT_2_LSBITS(*(buf + 1));
if (!ksv_count) {
DRM_DEBUG("Revoked KSV count is 0\n");
return count;
return 0;
}
kfree(srm_data->revoked_ksv_list);
srm_data->revoked_ksv_list = kcalloc(ksv_count, DRM_HDCP_KSV_LEN,
GFP_KERNEL);
if (!srm_data->revoked_ksv_list) {
*revoked_ksv_list = kcalloc(ksv_count, DRM_HDCP_KSV_LEN, GFP_KERNEL);
if (!*revoked_ksv_list) {
DRM_ERROR("Out of Memory\n");
return -ENOMEM;
}
@ -210,10 +200,10 @@ static int drm_hdcp_parse_hdcp2_srm(const u8 *buf, size_t count)
buf += DRM_HDCP_2_NO_OF_DEV_PLUS_RESERVED_SZ;
DRM_DEBUG("Revoked KSVs: %d\n", ksv_count);
memcpy(srm_data->revoked_ksv_list, buf, ksv_sz);
memcpy(*revoked_ksv_list, buf, ksv_sz);
srm_data->revoked_ksv_cnt = ksv_count;
return count;
*revoked_ksv_cnt = ksv_count;
return 0;
}
static inline bool is_srm_version_hdcp1(const u8 *buf)
@ -226,22 +216,27 @@ static inline bool is_srm_version_hdcp2(const u8 *buf)
return *buf == (u8)(DRM_HDCP_2_SRM_ID << 4 | DRM_HDCP_2_INDICATOR);
}
static void drm_hdcp_srm_update(const u8 *buf, size_t count)
static int drm_hdcp_srm_update(const u8 *buf, size_t count,
u8 **revoked_ksv_list, u32 *revoked_ksv_cnt)
{
if (count < sizeof(struct hdcp_srm_header))
return;
return -EINVAL;
if (is_srm_version_hdcp1(buf))
drm_hdcp_parse_hdcp1_srm(buf, count);
return drm_hdcp_parse_hdcp1_srm(buf, count, revoked_ksv_list,
revoked_ksv_cnt);
else if (is_srm_version_hdcp2(buf))
drm_hdcp_parse_hdcp2_srm(buf, count);
return drm_hdcp_parse_hdcp2_srm(buf, count, revoked_ksv_list,
revoked_ksv_cnt);
else
return -EINVAL;
}
static void drm_hdcp_request_srm(struct drm_device *drm_dev)
static int drm_hdcp_request_srm(struct drm_device *drm_dev,
u8 **revoked_ksv_list, u32 *revoked_ksv_cnt)
{
char fw_name[36] = "display_hdcp_srm.bin";
const struct firmware *fw;
int ret;
ret = request_firmware_direct(&fw, (const char *)fw_name,
@ -250,10 +245,12 @@ static void drm_hdcp_request_srm(struct drm_device *drm_dev)
goto exit;
if (fw->size && fw->data)
drm_hdcp_srm_update(fw->data, fw->size);
ret = drm_hdcp_srm_update(fw->data, fw->size, revoked_ksv_list,
revoked_ksv_cnt);
exit:
release_firmware(fw);
return ret;
}
/**
@ -279,71 +276,34 @@ exit:
* https://www.digital-cp.com/sites/default/files/specifications/HDCP%20on%20HDMI%20Specification%20Rev2_2_Final1.pdf
*
* Returns:
* TRUE on any of the KSV is revoked, else FALSE.
* Count of the revoked KSVs or -ve error number in case of failure.
*/
bool drm_hdcp_check_ksvs_revoked(struct drm_device *drm_dev, u8 *ksvs,
u32 ksv_count)
int drm_hdcp_check_ksvs_revoked(struct drm_device *drm_dev, u8 *ksvs,
u32 ksv_count)
{
u32 rev_ksv_cnt, cnt, i, j;
u8 *rev_ksv_list;
u32 revoked_ksv_cnt = 0, i, j;
u8 *revoked_ksv_list = NULL;
int ret = 0;
if (!srm_data)
return false;
ret = drm_hdcp_request_srm(drm_dev, &revoked_ksv_list,
&revoked_ksv_cnt);
mutex_lock(&srm_data->mutex);
drm_hdcp_request_srm(drm_dev);
/* revoked_ksv_cnt will be zero when above function failed */
for (i = 0; i < revoked_ksv_cnt; i++)
for (j = 0; j < ksv_count; j++)
if (!memcmp(&ksvs[j * DRM_HDCP_KSV_LEN],
&revoked_ksv_list[i * DRM_HDCP_KSV_LEN],
DRM_HDCP_KSV_LEN)) {
DRM_DEBUG("Revoked KSV is ");
drm_hdcp_print_ksv(&ksvs[j * DRM_HDCP_KSV_LEN]);
ret++;
}
rev_ksv_cnt = srm_data->revoked_ksv_cnt;
rev_ksv_list = srm_data->revoked_ksv_list;
/* If the Revoked ksv list is empty */
if (!rev_ksv_cnt || !rev_ksv_list) {
mutex_unlock(&srm_data->mutex);
return false;
}
for (cnt = 0; cnt < ksv_count; cnt++) {
rev_ksv_list = srm_data->revoked_ksv_list;
for (i = 0; i < rev_ksv_cnt; i++) {
for (j = 0; j < DRM_HDCP_KSV_LEN; j++)
if (ksvs[j] != rev_ksv_list[j]) {
break;
} else if (j == (DRM_HDCP_KSV_LEN - 1)) {
DRM_DEBUG("Revoked KSV is ");
drm_hdcp_print_ksv(ksvs);
mutex_unlock(&srm_data->mutex);
return true;
}
/* Move the offset to next KSV in the revoked list */
rev_ksv_list += DRM_HDCP_KSV_LEN;
}
/* Iterate to next ksv_offset */
ksvs += DRM_HDCP_KSV_LEN;
}
mutex_unlock(&srm_data->mutex);
return false;
kfree(revoked_ksv_list);
return ret;
}
EXPORT_SYMBOL_GPL(drm_hdcp_check_ksvs_revoked);
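/*
 * Editor's sketch (not part of this patch): how a driver might consume the
 * new return convention of drm_hdcp_check_ksvs_revoked().  The helper name
 * and the ksv_fifo/num_downstream parameters are hypothetical.
 */
static int example_check_downstream_ksvs(struct drm_device *dev,
					 u8 *ksv_fifo, u32 num_downstream)
{
	int ret;

	ret = drm_hdcp_check_ksvs_revoked(dev, ksv_fifo, num_downstream);
	if (ret < 0)
		return ret;	/* SRM request/parse failure */
	if (ret > 0)
		return -EPERM;	/* at least one downstream KSV is revoked */

	return 0;		/* no revoked KSVs */
}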
int drm_setup_hdcp_srm(struct class *drm_class)
{
srm_data = kzalloc(sizeof(*srm_data), GFP_KERNEL);
if (!srm_data)
return -ENOMEM;
mutex_init(&srm_data->mutex);
return 0;
}
void drm_teardown_hdcp_srm(struct class *drm_class)
{
if (srm_data) {
kfree(srm_data->revoked_ksv_list);
kfree(srm_data);
}
}
static struct drm_prop_enum_list drm_cp_enum_list[] = {
{ DRM_MODE_CONTENT_PROTECTION_UNDESIRED, "Undesired" },
{ DRM_MODE_CONTENT_PROTECTION_DESIRED, "Desired" },

View File

@ -236,7 +236,3 @@ int drm_syncobj_query_ioctl(struct drm_device *dev, void *data,
void drm_framebuffer_print_info(struct drm_printer *p, unsigned int indent,
const struct drm_framebuffer *fb);
int drm_framebuffer_debugfs_init(struct drm_minor *minor);
/* drm_hdcp.c */
int drm_setup_hdcp_srm(struct class *drm_class);
void drm_teardown_hdcp_srm(struct class *drm_class);

View File

@ -45,6 +45,7 @@
#include <linux/export.h>
#include <linux/interval_tree_generic.h>
#include <linux/seq_file.h>
#include <linux/sched/signal.h>
#include <linux/slab.h>
#include <linux/stacktrace.h>
@ -366,6 +367,11 @@ next_hole(struct drm_mm *mm,
struct drm_mm_node *node,
enum drm_mm_insert_mode mode)
{
/* Searching is slow; check if we ran out of time/patience */
cond_resched();
if (fatal_signal_pending(current))
return NULL;
switch (mode) {
default:
case DRM_MM_INSERT_BEST:
@ -557,7 +563,7 @@ int drm_mm_insert_node_in_range(struct drm_mm * const mm,
return 0;
}
return -ENOSPC;
return signal_pending(current) ? -ERESTARTSYS : -ENOSPC;
}
EXPORT_SYMBOL(drm_mm_insert_node_in_range);
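/*
 * Editor's sketch (not part of this patch): a caller distinguishing the new
 * -ERESTARTSYS return from an ordinary -ENOSPC.  foo_alloc_node() and its
 * fixed range/mode arguments are hypothetical.
 */
static int foo_alloc_node(struct drm_mm *mm, struct drm_mm_node *node, u64 size)
{
	int err;

	err = drm_mm_insert_node_in_range(mm, node, size, 0, 0,
					  0, U64_MAX, DRM_MM_INSERT_BEST);
	if (err == -ERESTARTSYS)
		/* the search was interrupted by a fatal signal; restartable */
		return err;

	/* 0 on success, -ENOSPC when no suitable hole was found */
	return err;
}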

View File

@ -75,7 +75,6 @@ drm_dma_handle_t *drm_pci_alloc(struct drm_device * dev, size_t size, size_t ali
return dmah;
}
EXPORT_SYMBOL(drm_pci_alloc);
/**
@ -167,6 +166,18 @@ int drm_irq_by_busid(struct drm_device *dev, void *data,
return drm_pci_irq_by_busid(dev, p);
}
void drm_pci_agp_destroy(struct drm_device *dev)
{
if (dev->agp) {
arch_phys_wc_del(dev->agp->agp_mtrr);
drm_legacy_agp_clear(dev);
kfree(dev->agp);
dev->agp = NULL;
}
}
#ifdef CONFIG_DRM_LEGACY
static void drm_pci_agp_init(struct drm_device *dev)
{
if (drm_core_check_feature(dev, DRIVER_USE_AGP)) {
@ -181,33 +192,9 @@ static void drm_pci_agp_init(struct drm_device *dev)
}
}
void drm_pci_agp_destroy(struct drm_device *dev)
{
if (dev->agp) {
arch_phys_wc_del(dev->agp->agp_mtrr);
drm_legacy_agp_clear(dev);
kfree(dev->agp);
dev->agp = NULL;
}
}
/**
* drm_get_pci_dev - Register a PCI device with the DRM subsystem
* @pdev: PCI device
* @ent: entry from the PCI ID table that matches @pdev
* @driver: DRM device driver
*
* Attempt to get inter module "drm" information. If we are first,
* then register the character device and inter module information.
* Try to register; if we fail to register, back out the previous work.
*
* NOTE: This function is deprecated, please use drm_dev_alloc() and
* drm_dev_register() instead and remove your &drm_driver.load callback.
*
* Return: 0 on success or a negative error code on failure.
*/
int drm_get_pci_dev(struct pci_dev *pdev, const struct pci_device_id *ent,
struct drm_driver *driver)
static int drm_get_pci_dev(struct pci_dev *pdev,
const struct pci_device_id *ent,
struct drm_driver *driver)
{
struct drm_device *dev;
int ret;
@ -250,9 +237,6 @@ err_free:
drm_dev_put(dev);
return ret;
}
EXPORT_SYMBOL(drm_get_pci_dev);
#ifdef CONFIG_DRM_LEGACY
/**
* drm_legacy_pci_init - shadow-attach a legacy DRM PCI driver

View File

@ -99,6 +99,9 @@ int drm_legacy_sg_alloc(struct drm_device *dev, void *data,
if (!drm_core_check_feature(dev, DRIVER_SG))
return -EOPNOTSUPP;
if (request->size > SIZE_MAX - PAGE_SIZE)
return -EINVAL;
if (dev->sg)
return -EINVAL;

View File

@ -26,12 +26,51 @@
* entity. Some flexibility for code reuse is provided through a separately
* allocated &drm_connector object and supporting optional &drm_bridge
* encoder drivers.
*
* Many drivers require only a very simple encoder that fulfills the minimum
* requirements of the display pipeline and does not add additional
* functionality. The function drm_simple_encoder_init() provides an
* implementation of such an encoder.
*/
static const struct drm_encoder_funcs drm_simple_kms_encoder_funcs = {
static const struct drm_encoder_funcs drm_simple_encoder_funcs_cleanup = {
.destroy = drm_encoder_cleanup,
};
/**
* drm_simple_encoder_init - Initialize a preallocated encoder with
* basic functionality.
* @dev: drm device
* @encoder: the encoder to initialize
* @encoder_type: user visible type of the encoder
*
* Initialises a preallocated encoder that has no further functionality.
* Settings for possible CRTC and clones are left to their initial values.
* The encoder will be cleaned up automatically as part of the mode-setting
* cleanup.
*
* The caller of drm_simple_encoder_init() is responsible for freeing
* the encoder's memory after the encoder has been cleaned up. At the
* moment this only works reliably if the encoder data structure is
* stored in the device structure. Free the encoder's memory as part of
* the device release function.
*
* FIXME: Later improvements to DRM's resource management may allow for
* an automated kfree() of the encoder's memory.
*
* Returns:
* Zero on success, error code on failure.
*/
int drm_simple_encoder_init(struct drm_device *dev,
struct drm_encoder *encoder,
int encoder_type)
{
return drm_encoder_init(dev, encoder,
&drm_simple_encoder_funcs_cleanup,
encoder_type, NULL);
}
EXPORT_SYMBOL(drm_simple_encoder_init);
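/*
 * Editor's sketch (not part of this patch): typical use of the helper
 * documented above, with the encoder embedded in a hypothetical driver
 * structure so its lifetime matches the device, as the kerneldoc requires.
 */
struct foo_device {
	struct drm_device drm;
	struct drm_encoder encoder;
};

static int foo_create_encoder(struct foo_device *foo)
{
	int ret;

	ret = drm_simple_encoder_init(&foo->drm, &foo->encoder,
				      DRM_MODE_ENCODER_NONE);
	if (ret)
		return ret;

	foo->encoder.possible_crtcs = 0x1;
	return 0;
}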
static enum drm_mode_status
drm_simple_kms_crtc_mode_valid(struct drm_crtc *crtc,
const struct drm_display_mode *mode)
@ -288,8 +327,7 @@ int drm_simple_display_pipe_init(struct drm_device *dev,
return ret;
encoder->possible_crtcs = drm_crtc_mask(crtc);
ret = drm_encoder_init(dev, encoder, &drm_simple_kms_encoder_funcs,
DRM_MODE_ENCODER_NONE, NULL);
ret = drm_simple_encoder_init(dev, encoder, DRM_MODE_ENCODER_NONE);
if (ret || !connector)
return ret;

View File

@ -85,7 +85,6 @@ int drm_sysfs_init(void)
}
drm_class->devnode = drm_devnode;
drm_setup_hdcp_srm(drm_class);
return 0;
}
@ -98,7 +97,6 @@ void drm_sysfs_destroy(void)
{
if (IS_ERR_OR_NULL(drm_class))
return;
drm_teardown_hdcp_srm(drm_class);
class_remove_file(drm_class, &class_attr_version.attr);
class_destroy(drm_class);
drm_class = NULL;

View File

@ -592,8 +592,7 @@ EXPORT_SYMBOL(drm_calc_timestamping_constants);
/**
* drm_crtc_vblank_helper_get_vblank_timestamp_internal - precise vblank
* timestamp helper
* @dev: DRM device
* @pipe: index of CRTC whose vblank timestamp to retrieve
* @crtc: CRTC whose vblank timestamp to retrieve
* @max_error: Desired maximum allowable error in timestamps (nanosecs)
* On return contains true maximum error of timestamp
* @vblank_time: Pointer to time which should receive the timestamp

View File

@ -1514,7 +1514,6 @@ static int exynos_dsi_create_connector(struct drm_encoder *encoder)
return 0;
connector->funcs->reset(connector);
drm_fb_helper_add_one_connector(drm->fb_helper, connector);
drm_connector_register(connector);
return 0;
}

View File

@ -200,21 +200,13 @@ int exynos_drm_fbdev_init(struct drm_device *dev)
drm_fb_helper_prepare(dev, helper, &exynos_drm_fb_helper_funcs);
ret = drm_fb_helper_init(dev, helper, MAX_CONNECTOR);
ret = drm_fb_helper_init(dev, helper);
if (ret < 0) {
DRM_DEV_ERROR(dev->dev,
"failed to initialize drm fb helper.\n");
goto err_init;
}
ret = drm_fb_helper_single_add_all_connectors(helper);
if (ret < 0) {
DRM_DEV_ERROR(dev->dev,
"failed to register drm_fb_helper_connector.\n");
goto err_setup;
}
ret = drm_fb_helper_initial_config(helper, PREFERRED_BPP);
if (ret < 0) {
DRM_DEV_ERROR(dev->dev,

View File

@ -513,14 +513,10 @@ int psb_fbdev_init(struct drm_device *dev)
drm_fb_helper_prepare(dev, fb_helper, &psb_fb_helper_funcs);
ret = drm_fb_helper_init(dev, fb_helper, INTELFB_CONN_LIMIT);
ret = drm_fb_helper_init(dev, fb_helper);
if (ret)
goto free;
ret = drm_fb_helper_single_add_all_connectors(fb_helper);
if (ret)
goto fini;
/* disable all the possible outputs/crtcs before entering KMS mode */
drm_helper_disable_unused_functions(dev);

View File

@ -227,7 +227,7 @@ struct bdb_general_definitions {
* number = (block_size - sizeof(bdb_general_definitions))/
* sizeof(child_device_config);
*/
struct child_device_config devices[0];
struct child_device_config devices[];
};
struct bdb_lvds_options {

View File

@ -721,27 +721,15 @@ err:
static void intel_dp_register_mst_connector(struct drm_connector *connector)
{
struct drm_i915_private *dev_priv = to_i915(connector->dev);
if (dev_priv->fbdev)
drm_fb_helper_add_one_connector(&dev_priv->fbdev->helper,
connector);
drm_connector_register(connector);
}
static void intel_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
struct drm_connector *connector)
{
struct drm_i915_private *dev_priv = to_i915(connector->dev);
DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n", connector->base.id, connector->name);
drm_connector_unregister(connector);
if (dev_priv->fbdev)
drm_fb_helper_remove_one_connector(&dev_priv->fbdev->helper,
connector);
drm_connector_put(connector);
}

View File

@ -453,7 +453,7 @@ int intel_fbdev_init(struct drm_device *dev)
if (!intel_fbdev_init_bios(dev, ifbdev))
ifbdev->preferred_bpp = 32;
ret = drm_fb_helper_init(dev, &ifbdev->helper, 4);
ret = drm_fb_helper_init(dev, &ifbdev->helper);
if (ret) {
kfree(ifbdev);
return ret;
@ -462,8 +462,6 @@ int intel_fbdev_init(struct drm_device *dev)
dev_priv->fbdev = ifbdev;
INIT_WORK(&dev_priv->fbdev_suspend_work, intel_fbdev_suspend_worker);
drm_fb_helper_single_add_all_connectors(&ifbdev->helper);
return 0;
}

View File

@ -95,7 +95,6 @@
#define MATROX_DPMS_CLEARED (-1)
#define to_mga_crtc(x) container_of(x, struct mga_crtc, base)
#define to_mga_encoder(x) container_of(x, struct mga_encoder, base)
#define to_mga_connector(x) container_of(x, struct mga_connector, base)
struct mga_crtc {
@ -110,12 +109,6 @@ struct mga_mode_info {
struct mga_crtc *crtc;
};
struct mga_encoder {
struct drm_encoder base;
int last_dpms;
};
struct mga_i2c_chan {
struct i2c_adapter adapter;
struct drm_device *dev;
@ -185,6 +178,8 @@ struct mga_device {
/* SE model number stored in reg 0x1e24 */
u32 unique_rev_id;
struct drm_encoder encoder;
};
static inline enum mga_type

View File

@ -15,6 +15,7 @@
#include <drm/drm_fourcc.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_simple_kms_helper.h>
#include "mgag200_drv.h"
@ -1449,76 +1450,6 @@ static void mga_crtc_init(struct mga_device *mdev)
drm_crtc_helper_add(&mga_crtc->base, &mga_helper_funcs);
}
/*
* The encoder comes after the CRTC in the output pipeline, but before
* the connector. It's responsible for ensuring that the digital
* stream is appropriately converted into the output format. Setup is
* very simple in this case - all we have to do is inform qemu of the
* colour depth in order to ensure that it displays appropriately
*/
/*
* These functions are analagous to those in the CRTC code, but are intended
* to handle any encoder-specific limitations
*/
static void mga_encoder_mode_set(struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
}
static void mga_encoder_dpms(struct drm_encoder *encoder, int state)
{
return;
}
static void mga_encoder_prepare(struct drm_encoder *encoder)
{
}
static void mga_encoder_commit(struct drm_encoder *encoder)
{
}
static void mga_encoder_destroy(struct drm_encoder *encoder)
{
struct mga_encoder *mga_encoder = to_mga_encoder(encoder);
drm_encoder_cleanup(encoder);
kfree(mga_encoder);
}
static const struct drm_encoder_helper_funcs mga_encoder_helper_funcs = {
.dpms = mga_encoder_dpms,
.mode_set = mga_encoder_mode_set,
.prepare = mga_encoder_prepare,
.commit = mga_encoder_commit,
};
static const struct drm_encoder_funcs mga_encoder_encoder_funcs = {
.destroy = mga_encoder_destroy,
};
static struct drm_encoder *mga_encoder_init(struct drm_device *dev)
{
struct drm_encoder *encoder;
struct mga_encoder *mga_encoder;
mga_encoder = kzalloc(sizeof(struct mga_encoder), GFP_KERNEL);
if (!mga_encoder)
return NULL;
encoder = &mga_encoder->base;
encoder->possible_crtcs = 0x1;
drm_encoder_init(dev, encoder, &mga_encoder_encoder_funcs,
DRM_MODE_ENCODER_DAC, NULL);
drm_encoder_helper_add(encoder, &mga_encoder_helper_funcs);
return encoder;
}
static int mga_vga_get_modes(struct drm_connector *connector)
{
struct mga_connector *mga_connector = to_mga_connector(connector);
@ -1686,8 +1617,9 @@ static struct drm_connector *mga_vga_init(struct drm_device *dev)
int mgag200_modeset_init(struct mga_device *mdev)
{
struct drm_encoder *encoder;
struct drm_encoder *encoder = &mdev->encoder;
struct drm_connector *connector;
int ret;
mdev->mode_info.mode_config_initialized = true;
@ -1698,11 +1630,15 @@ int mgag200_modeset_init(struct mga_device *mdev)
mga_crtc_init(mdev);
encoder = mga_encoder_init(mdev->dev);
if (!encoder) {
DRM_ERROR("mga_encoder_init failed\n");
return -1;
ret = drm_simple_encoder_init(mdev->dev, encoder,
DRM_MODE_ENCODER_DAC);
if (ret) {
drm_err(mdev->dev,
"drm_simple_encoder_init() failed, error %d\n",
ret);
return ret;
}
encoder->possible_crtcs = 0x1;
connector = mga_vga_init(mdev->dev);
if (!connector) {

View File

@ -160,16 +160,12 @@ struct drm_fb_helper *msm_fbdev_init(struct drm_device *dev)
drm_fb_helper_prepare(dev, helper, &msm_fb_helper_funcs);
ret = drm_fb_helper_init(dev, helper, priv->num_connectors);
ret = drm_fb_helper_init(dev, helper);
if (ret) {
DRM_DEV_ERROR(dev->dev, "could not init fbdev: ret=%d\n", ret);
goto fail;
}
ret = drm_fb_helper_single_add_all_connectors(helper);
if (ret)
goto fini;
/* the fw fb could be anywhere in memory */
drm_fb_helper_remove_conflicting_framebuffers(NULL, "msm", false);

View File

@ -1260,23 +1260,16 @@ static void
nv50_mstm_destroy_connector(struct drm_dp_mst_topology_mgr *mgr,
struct drm_connector *connector)
{
struct nouveau_drm *drm = nouveau_drm(connector->dev);
struct nv50_mstc *mstc = nv50_mstc(connector);
drm_connector_unregister(&mstc->connector);
drm_fb_helper_remove_one_connector(&drm->fbcon->helper, &mstc->connector);
drm_connector_put(&mstc->connector);
}
static void
nv50_mstm_register_connector(struct drm_connector *connector)
{
struct nouveau_drm *drm = nouveau_drm(connector->dev);
drm_fb_helper_add_one_connector(&drm->fbcon->helper, connector);
drm_connector_register(connector);
}

View File

@ -558,14 +558,10 @@ nouveau_fbcon_init(struct drm_device *dev)
drm_fb_helper_prepare(dev, &fbcon->helper, &nouveau_fbcon_helper_funcs);
ret = drm_fb_helper_init(dev, &fbcon->helper, 4);
ret = drm_fb_helper_init(dev, &fbcon->helper);
if (ret)
goto free;
ret = drm_fb_helper_single_add_all_connectors(&fbcon->helper);
if (ret)
goto fini;
if (preferred_bpp != 8 && preferred_bpp != 16 && preferred_bpp != 32) {
if (drm->client.device.info.ram_size <= 32 * 1024 * 1024)
preferred_bpp = 8;

View File

@ -242,14 +242,10 @@ void omap_fbdev_init(struct drm_device *dev)
drm_fb_helper_prepare(dev, helper, &omap_fb_helper_funcs);
ret = drm_fb_helper_init(dev, helper, priv->num_pipes);
ret = drm_fb_helper_init(dev, helper);
if (ret)
goto fail;
ret = drm_fb_helper_single_add_all_connectors(helper);
if (ret)
goto fini;
ret = drm_fb_helper_initial_config(helper, 32);
if (ret)
goto fini;

View File

@ -59,6 +59,16 @@ config DRM_PANEL_SIMPLE
that it can be automatically turned off when the panel goes into a
low power state.
config DRM_PANEL_ELIDA_KD35T133
tristate "Elida KD35T133 panel driver"
depends on OF
depends on DRM_MIPI_DSI
depends on BACKLIGHT_CLASS_DEVICE
help
Say Y here if you want to enable support for the Elida
KD35T133 controller for 320x480 LCD panels with MIPI-DSI
system interfaces.
config DRM_PANEL_FEIXIN_K101_IM2BA02
tristate "Feixin K101 IM2BA02 panel"
depends on OF
@ -167,6 +177,16 @@ config DRM_PANEL_NEC_NL8048HL11
panel (found on the Zoom2/3/3630 SDP boards). To compile this driver
as a module, choose M here.
config DRM_PANEL_NOVATEK_NT35510
tristate "Novatek NT35510 RGB panel driver"
depends on OF
depends on DRM_MIPI_DSI
depends on BACKLIGHT_CLASS_DEVICE
help
Say Y here if you want to enable support for the panels built
around the Novatek NT35510 display controller, such as some
Hydis panels.
config DRM_PANEL_NOVATEK_NT39016
tristate "Novatek NT39016 RGB/SPI panel"
depends on OF && SPI

View File

@ -4,6 +4,7 @@ obj-$(CONFIG_DRM_PANEL_BOE_HIMAX8279D) += panel-boe-himax8279d.o
obj-$(CONFIG_DRM_PANEL_BOE_TV101WUM_NL6) += panel-boe-tv101wum-nl6.o
obj-$(CONFIG_DRM_PANEL_LVDS) += panel-lvds.o
obj-$(CONFIG_DRM_PANEL_SIMPLE) += panel-simple.o
obj-$(CONFIG_DRM_PANEL_ELIDA_KD35T133) += panel-elida-kd35t133.o
obj-$(CONFIG_DRM_PANEL_FEIXIN_K101_IM2BA02) += panel-feixin-k101-im2ba02.o
obj-$(CONFIG_DRM_PANEL_FEIYANG_FY07024DI26A30D) += panel-feiyang-fy07024di26a30d.o
obj-$(CONFIG_DRM_PANEL_ILITEK_IL9322) += panel-ilitek-ili9322.o
@ -15,6 +16,7 @@ obj-$(CONFIG_DRM_PANEL_LEADTEK_LTK500HD1829) += panel-leadtek-ltk500hd1829.o
obj-$(CONFIG_DRM_PANEL_LG_LB035Q02) += panel-lg-lb035q02.o
obj-$(CONFIG_DRM_PANEL_LG_LG4573) += panel-lg-lg4573.o
obj-$(CONFIG_DRM_PANEL_NEC_NL8048HL11) += panel-nec-nl8048hl11.o
obj-$(CONFIG_DRM_PANEL_NOVATEK_NT35510) += panel-novatek-nt35510.o
obj-$(CONFIG_DRM_PANEL_NOVATEK_NT39016) += panel-novatek-nt39016.o
obj-$(CONFIG_DRM_PANEL_OLIMEX_LCD_OLINUXINO) += panel-olimex-lcd-olinuxino.o
obj-$(CONFIG_DRM_PANEL_ORISETECH_OTM8009A) += panel-orisetech-otm8009a.o

View File

@ -0,0 +1,352 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Elida kd35t133 5.5" MIPI-DSI panel driver
* Copyright (C) 2020 Theobroma Systems Design und Consulting GmbH
*
* based on
*
* Rockteck jh057n00900 5.5" MIPI-DSI panel driver
* Copyright (C) Purism SPC 2019
*/
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/media-bus-format.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/regulator/consumer.h>
#include <video/display_timing.h>
#include <video/mipi_display.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_modes.h>
#include <drm/drm_panel.h>
#include <drm/drm_print.h>
/* Manufacturer specific Commands send via DSI */
#define KD35T133_CMD_INTERFACEMODECTRL 0xb0
#define KD35T133_CMD_FRAMERATECTRL 0xb1
#define KD35T133_CMD_DISPLAYINVERSIONCTRL 0xb4
#define KD35T133_CMD_DISPLAYFUNCTIONCTRL 0xb6
#define KD35T133_CMD_POWERCONTROL1 0xc0
#define KD35T133_CMD_POWERCONTROL2 0xc1
#define KD35T133_CMD_VCOMCONTROL 0xc5
#define KD35T133_CMD_POSITIVEGAMMA 0xe0
#define KD35T133_CMD_NEGATIVEGAMMA 0xe1
#define KD35T133_CMD_SETIMAGEFUNCTION 0xe9
#define KD35T133_CMD_ADJUSTCONTROL3 0xf7
struct kd35t133 {
struct device *dev;
struct drm_panel panel;
struct gpio_desc *reset_gpio;
struct regulator *vdd;
struct regulator *iovcc;
bool prepared;
};
static inline struct kd35t133 *panel_to_kd35t133(struct drm_panel *panel)
{
return container_of(panel, struct kd35t133, panel);
}
#define dsi_dcs_write_seq(dsi, cmd, seq...) do { \
static const u8 d[] = { seq }; \
int ret; \
ret = mipi_dsi_dcs_write(dsi, cmd, d, ARRAY_SIZE(d)); \
if (ret < 0) \
return ret; \
} while (0)
static int kd35t133_init_sequence(struct kd35t133 *ctx)
{
struct mipi_dsi_device *dsi = to_mipi_dsi_device(ctx->dev);
struct device *dev = ctx->dev;
/*
* Init sequence was supplied by the panel vendor with minimal
* documentation.
*/
dsi_dcs_write_seq(dsi, KD35T133_CMD_POSITIVEGAMMA,
0x00, 0x13, 0x18, 0x04, 0x0f, 0x06, 0x3a, 0x56,
0x4d, 0x03, 0x0a, 0x06, 0x30, 0x3e, 0x0f);
dsi_dcs_write_seq(dsi, KD35T133_CMD_NEGATIVEGAMMA,
0x00, 0x13, 0x18, 0x01, 0x11, 0x06, 0x38, 0x34,
0x4d, 0x06, 0x0d, 0x0b, 0x31, 0x37, 0x0f);
dsi_dcs_write_seq(dsi, KD35T133_CMD_POWERCONTROL1, 0x18, 0x17);
dsi_dcs_write_seq(dsi, KD35T133_CMD_POWERCONTROL2, 0x41);
dsi_dcs_write_seq(dsi, KD35T133_CMD_VCOMCONTROL, 0x00, 0x1a, 0x80);
dsi_dcs_write_seq(dsi, MIPI_DCS_SET_ADDRESS_MODE, 0x48);
dsi_dcs_write_seq(dsi, MIPI_DCS_SET_PIXEL_FORMAT, 0x55);
dsi_dcs_write_seq(dsi, KD35T133_CMD_INTERFACEMODECTRL, 0x00);
dsi_dcs_write_seq(dsi, KD35T133_CMD_FRAMERATECTRL, 0xa0);
dsi_dcs_write_seq(dsi, KD35T133_CMD_DISPLAYINVERSIONCTRL, 0x02);
dsi_dcs_write_seq(dsi, KD35T133_CMD_DISPLAYFUNCTIONCTRL,
0x20, 0x02);
dsi_dcs_write_seq(dsi, KD35T133_CMD_SETIMAGEFUNCTION, 0x00);
dsi_dcs_write_seq(dsi, KD35T133_CMD_ADJUSTCONTROL3,
0xa9, 0x51, 0x2c, 0x82);
mipi_dsi_dcs_write(dsi, MIPI_DCS_ENTER_INVERT_MODE, NULL, 0);
DRM_DEV_DEBUG_DRIVER(dev, "Panel init sequence done\n");
return 0;
}
static int kd35t133_unprepare(struct drm_panel *panel)
{
struct kd35t133 *ctx = panel_to_kd35t133(panel);
struct mipi_dsi_device *dsi = to_mipi_dsi_device(ctx->dev);
int ret;
if (!ctx->prepared)
return 0;
ret = mipi_dsi_dcs_set_display_off(dsi);
if (ret < 0)
DRM_DEV_ERROR(ctx->dev, "failed to set display off: %d\n",
ret);
ret = mipi_dsi_dcs_enter_sleep_mode(dsi);
if (ret < 0) {
DRM_DEV_ERROR(ctx->dev, "failed to enter sleep mode: %d\n",
ret);
return ret;
}
regulator_disable(ctx->iovcc);
regulator_disable(ctx->vdd);
ctx->prepared = false;
return 0;
}
static int kd35t133_prepare(struct drm_panel *panel)
{
struct kd35t133 *ctx = panel_to_kd35t133(panel);
struct mipi_dsi_device *dsi = to_mipi_dsi_device(ctx->dev);
int ret;
if (ctx->prepared)
return 0;
DRM_DEV_DEBUG_DRIVER(ctx->dev, "Resetting the panel\n");
ret = regulator_enable(ctx->vdd);
if (ret < 0) {
DRM_DEV_ERROR(ctx->dev,
"Failed to enable vdd supply: %d\n", ret);
return ret;
}
ret = regulator_enable(ctx->iovcc);
if (ret < 0) {
DRM_DEV_ERROR(ctx->dev,
"Failed to enable iovcc supply: %d\n", ret);
goto disable_vdd;
}
msleep(20);
gpiod_set_value_cansleep(ctx->reset_gpio, 1);
usleep_range(10, 20);
gpiod_set_value_cansleep(ctx->reset_gpio, 0);
msleep(20);
ret = mipi_dsi_dcs_exit_sleep_mode(dsi);
if (ret < 0) {
DRM_DEV_ERROR(ctx->dev, "Failed to exit sleep mode: %d\n", ret);
goto disable_iovcc;
}
msleep(250);
ret = kd35t133_init_sequence(ctx);
if (ret < 0) {
DRM_DEV_ERROR(ctx->dev, "Panel init sequence failed: %d\n",
ret);
goto disable_iovcc;
}
ret = mipi_dsi_dcs_set_display_on(dsi);
if (ret < 0) {
DRM_DEV_ERROR(ctx->dev, "Failed to set display on: %d\n", ret);
goto disable_iovcc;
}
msleep(50);
ctx->prepared = true;
return 0;
disable_iovcc:
regulator_disable(ctx->iovcc);
disable_vdd:
regulator_disable(ctx->vdd);
return ret;
}
static const struct drm_display_mode default_mode = {
.hdisplay = 320,
.hsync_start = 320 + 130,
.hsync_end = 320 + 130 + 4,
.htotal = 320 + 130 + 4 + 130,
.vdisplay = 480,
.vsync_start = 480 + 2,
.vsync_end = 480 + 2 + 1,
.vtotal = 480 + 2 + 1 + 2,
.vrefresh = 60,
.clock = 17000,
.width_mm = 42,
.height_mm = 82,
};
static int kd35t133_get_modes(struct drm_panel *panel,
struct drm_connector *connector)
{
struct kd35t133 *ctx = panel_to_kd35t133(panel);
struct drm_display_mode *mode;
mode = drm_mode_duplicate(connector->dev, &default_mode);
if (!mode) {
DRM_DEV_ERROR(ctx->dev, "Failed to add mode %ux%u@%u\n",
default_mode.hdisplay, default_mode.vdisplay,
default_mode.vrefresh);
return -ENOMEM;
}
drm_mode_set_name(mode);
mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
connector->display_info.width_mm = mode->width_mm;
connector->display_info.height_mm = mode->height_mm;
drm_mode_probed_add(connector, mode);
return 1;
}
static const struct drm_panel_funcs kd35t133_funcs = {
.unprepare = kd35t133_unprepare,
.prepare = kd35t133_prepare,
.get_modes = kd35t133_get_modes,
};
static int kd35t133_probe(struct mipi_dsi_device *dsi)
{
struct device *dev = &dsi->dev;
struct kd35t133 *ctx;
int ret;
ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return -ENOMEM;
ctx->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
if (IS_ERR(ctx->reset_gpio)) {
DRM_DEV_ERROR(dev, "cannot get reset gpio\n");
return PTR_ERR(ctx->reset_gpio);
}
ctx->vdd = devm_regulator_get(dev, "vdd");
if (IS_ERR(ctx->vdd)) {
ret = PTR_ERR(ctx->vdd);
if (ret != -EPROBE_DEFER)
DRM_DEV_ERROR(dev,
"Failed to request vdd regulator: %d\n",
ret);
return ret;
}
ctx->iovcc = devm_regulator_get(dev, "iovcc");
if (IS_ERR(ctx->iovcc)) {
ret = PTR_ERR(ctx->iovcc);
if (ret != -EPROBE_DEFER)
DRM_DEV_ERROR(dev,
"Failed to request iovcc regulator: %d\n",
ret);
return ret;
}
mipi_dsi_set_drvdata(dsi, ctx);
ctx->dev = dev;
dsi->lanes = 1;
dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_EOT_PACKET;
drm_panel_init(&ctx->panel, &dsi->dev, &kd35t133_funcs,
DRM_MODE_CONNECTOR_DSI);
ret = drm_panel_of_backlight(&ctx->panel);
if (ret)
return ret;
drm_panel_add(&ctx->panel);
ret = mipi_dsi_attach(dsi);
if (ret < 0) {
DRM_DEV_ERROR(dev, "mipi_dsi_attach failed: %d\n", ret);
drm_panel_remove(&ctx->panel);
return ret;
}
return 0;
}
static void kd35t133_shutdown(struct mipi_dsi_device *dsi)
{
struct kd35t133 *ctx = mipi_dsi_get_drvdata(dsi);
int ret;
ret = drm_panel_unprepare(&ctx->panel);
if (ret < 0)
DRM_DEV_ERROR(&dsi->dev, "Failed to unprepare panel: %d\n",
ret);
ret = drm_panel_disable(&ctx->panel);
if (ret < 0)
DRM_DEV_ERROR(&dsi->dev, "Failed to disable panel: %d\n",
ret);
}
static int kd35t133_remove(struct mipi_dsi_device *dsi)
{
struct kd35t133 *ctx = mipi_dsi_get_drvdata(dsi);
int ret;
kd35t133_shutdown(dsi);
ret = mipi_dsi_detach(dsi);
if (ret < 0)
DRM_DEV_ERROR(&dsi->dev, "Failed to detach from DSI host: %d\n",
ret);
drm_panel_remove(&ctx->panel);
return 0;
}
static const struct of_device_id kd35t133_of_match[] = {
{ .compatible = "elida,kd35t133" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, kd35t133_of_match);
static struct mipi_dsi_driver kd35t133_driver = {
.driver = {
.name = "panel-elida-kd35t133",
.of_match_table = kd35t133_of_match,
},
.probe = kd35t133_probe,
.remove = kd35t133_remove,
.shutdown = kd35t133_shutdown,
};
module_mipi_dsi_driver(kd35t133_driver);
MODULE_AUTHOR("Heiko Stuebner <heiko.stuebner@theobroma-systems.com>");
MODULE_DESCRIPTION("DRM driver for Elida kd35t133 MIPI DSI panel");
MODULE_LICENSE("GPL v2");

File diff suppressed because it is too large

View File

@ -351,6 +351,65 @@ static const struct drm_panel_funcs panel_simple_funcs = {
.get_timings = panel_simple_get_timings,
};
static struct panel_desc panel_dpi;
static int panel_dpi_probe(struct device *dev,
struct panel_simple *panel)
{
struct display_timing *timing;
const struct device_node *np;
struct panel_desc *desc;
unsigned int bus_flags;
struct videomode vm;
const char *mapping;
int ret;
np = dev->of_node;
desc = devm_kzalloc(dev, sizeof(*desc), GFP_KERNEL);
if (!desc)
return -ENOMEM;
timing = devm_kzalloc(dev, sizeof(*timing), GFP_KERNEL);
if (!timing)
return -ENOMEM;
ret = of_get_display_timing(np, "panel-timing", timing);
if (ret < 0) {
dev_err(dev, "%pOF: no panel-timing node found for \"panel-dpi\" binding\n",
np);
return ret;
}
desc->timings = timing;
desc->num_timings = 1;
of_property_read_u32(np, "width-mm", &desc->size.width);
of_property_read_u32(np, "height-mm", &desc->size.height);
of_property_read_string(np, "data-mapping", &mapping);
if (!strcmp(mapping, "rgb24"))
desc->bus_format = MEDIA_BUS_FMT_RGB888_1X24;
else if (!strcmp(mapping, "rgb565"))
desc->bus_format = MEDIA_BUS_FMT_RGB565_1X16;
else if (!strcmp(mapping, "bgr666"))
desc->bus_format = MEDIA_BUS_FMT_RGB666_1X18;
else if (!strcmp(mapping, "lvds666"))
desc->bus_format = MEDIA_BUS_FMT_RGB666_1X24_CPADHI;
/* Extract bus_flags from display_timing */
bus_flags = 0;
vm.flags = timing->flags;
drm_bus_flags_from_videomode(&vm, &bus_flags);
desc->bus_flags = bus_flags;
/* We do not know the connector for the DT node, so guess it */
desc->connector_type = DRM_MODE_CONNECTOR_DPI;
panel->desc = desc;
return 0;
}
#define PANEL_SIMPLE_BOUNDS_CHECK(to_check, bounds, field) \
(to_check->field.typ >= bounds->field.min && \
to_check->field.typ <= bounds->field.max)
@ -437,8 +496,15 @@ static int panel_simple_probe(struct device *dev, const struct panel_desc *desc)
return -EPROBE_DEFER;
}
if (!of_get_display_timing(dev->of_node, "panel-timing", &dt))
panel_simple_parse_panel_timing_node(dev, panel, &dt);
if (desc == &panel_dpi) {
/* Handle the generic panel-dpi binding */
err = panel_dpi_probe(dev, panel);
if (err)
goto free_ddc;
} else {
if (!of_get_display_timing(dev->of_node, "panel-timing", &dt))
panel_simple_parse_panel_timing_node(dev, panel, &dt);
}
drm_panel_init(&panel->base, dev, &panel_simple_funcs,
desc->connector_type);
@ -2340,6 +2406,51 @@ static const struct panel_desc netron_dy_e231732 = {
.bus_format = MEDIA_BUS_FMT_RGB666_1X18,
};
static const struct drm_display_mode neweast_wjfh116008a_modes[] = {
{
.clock = 138500,
.hdisplay = 1920,
.hsync_start = 1920 + 48,
.hsync_end = 1920 + 48 + 32,
.htotal = 1920 + 48 + 32 + 80,
.vdisplay = 1080,
.vsync_start = 1080 + 3,
.vsync_end = 1080 + 3 + 5,
.vtotal = 1080 + 3 + 5 + 23,
.vrefresh = 60,
.flags = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC,
}, {
.clock = 110920,
.hdisplay = 1920,
.hsync_start = 1920 + 48,
.hsync_end = 1920 + 48 + 32,
.htotal = 1920 + 48 + 32 + 80,
.vdisplay = 1080,
.vsync_start = 1080 + 3,
.vsync_end = 1080 + 3 + 5,
.vtotal = 1080 + 3 + 5 + 23,
.vrefresh = 48,
.flags = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC,
}
};
static const struct panel_desc neweast_wjfh116008a = {
.modes = neweast_wjfh116008a_modes,
.num_modes = 2,
.bpc = 6,
.size = {
.width = 260,
.height = 150,
},
.delay = {
.prepare = 110,
.enable = 20,
.unprepare = 500,
},
.bus_format = MEDIA_BUS_FMT_RGB666_1X18,
.connector_type = DRM_MODE_CONNECTOR_eDP,
};
static const struct drm_display_mode newhaven_nhd_43_480272ef_atxl_mode = {
.clock = 9000,
.hdisplay = 480,
@ -2914,30 +3025,6 @@ static const struct panel_desc sharp_lq123p1jx31 = {
},
};
static const struct drm_display_mode sharp_lq150x1lg11_mode = {
.clock = 71100,
.hdisplay = 1024,
.hsync_start = 1024 + 168,
.hsync_end = 1024 + 168 + 64,
.htotal = 1024 + 168 + 64 + 88,
.vdisplay = 768,
.vsync_start = 768 + 37,
.vsync_end = 768 + 37 + 2,
.vtotal = 768 + 37 + 2 + 8,
.vrefresh = 60,
};
static const struct panel_desc sharp_lq150x1lg11 = {
.modes = &sharp_lq150x1lg11_mode,
.num_modes = 1,
.bpc = 6,
.size = {
.width = 304,
.height = 228,
},
.bus_format = MEDIA_BUS_FMT_RGB565_1X16,
};
static const struct display_timing sharp_ls020b1dd01d_timing = {
.pixelclock = { 2000000, 4200000, 5000000 },
.hactive = { 240, 240, 240 },
@ -3560,6 +3647,9 @@ static const struct of_device_id platform_of_match[] = {
}, {
.compatible = "netron-dy,e231732",
.data = &netron_dy_e231732,
}, {
.compatible = "neweast,wjfh116008a",
.data = &neweast_wjfh116008a,
}, {
.compatible = "newhaven,nhd-4.3-480272ef-atxl",
.data = &newhaven_nhd_43_480272ef_atxl,
@ -3629,9 +3719,6 @@ static const struct of_device_id platform_of_match[] = {
}, {
.compatible = "sharp,lq123p1jx31",
.data = &sharp_lq123p1jx31,
}, {
.compatible = "sharp,lq150x1lg11",
.data = &sharp_lq150x1lg11,
}, {
.compatible = "sharp,ls020b1dd01d",
.data = &sharp_ls020b1dd01d,
@ -3689,6 +3776,10 @@ static const struct of_device_id platform_of_match[] = {
}, {
.compatible = "winstar,wf35ltiacd",
.data = &winstar_wf35ltiacd,
}, {
/* Must be the last entry */
.compatible = "panel-dpi",
.data = &panel_dpi,
}, {
/* sentinel */
}

View File

@ -659,7 +659,7 @@ static int panfrost_remove(struct platform_device *pdev)
return 0;
}
const char * const default_supplies[] = { "mali" };
static const char * const default_supplies[] = { "mali" };
static const struct panfrost_compatible default_data = {
.num_supplies = ARRAY_SIZE(default_supplies),
.supply_names = default_supplies,

View File

@ -31,6 +31,7 @@
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_simple_kms_helper.h>
#include "qxl_drv.h"
#include "qxl_object.h"
@ -1007,9 +1008,6 @@ static struct drm_encoder *qxl_best_encoder(struct drm_connector *connector)
return &qxl_output->enc;
}
static const struct drm_encoder_helper_funcs qxl_enc_helper_funcs = {
};
static const struct drm_connector_helper_funcs qxl_connector_helper_funcs = {
.get_modes = qxl_conn_get_modes,
.mode_valid = qxl_conn_mode_valid,
@ -1059,15 +1057,6 @@ static const struct drm_connector_funcs qxl_connector_funcs = {
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static void qxl_enc_destroy(struct drm_encoder *encoder)
{
drm_encoder_cleanup(encoder);
}
static const struct drm_encoder_funcs qxl_enc_funcs = {
.destroy = qxl_enc_destroy,
};
static int qxl_mode_create_hotplug_mode_update_property(struct qxl_device *qdev)
{
if (qdev->hotplug_mode_update_property)
@ -1086,6 +1075,7 @@ static int qdev_output_init(struct drm_device *dev, int num_output)
struct qxl_output *qxl_output;
struct drm_connector *connector;
struct drm_encoder *encoder;
int ret;
qxl_output = kzalloc(sizeof(struct qxl_output), GFP_KERNEL);
if (!qxl_output)
@ -1098,15 +1088,19 @@ static int qdev_output_init(struct drm_device *dev, int num_output)
drm_connector_init(dev, &qxl_output->base,
&qxl_connector_funcs, DRM_MODE_CONNECTOR_VIRTUAL);
drm_encoder_init(dev, &qxl_output->enc, &qxl_enc_funcs,
DRM_MODE_ENCODER_VIRTUAL, NULL);
ret = drm_simple_encoder_init(dev, &qxl_output->enc,
DRM_MODE_ENCODER_VIRTUAL);
if (ret) {
drm_err(dev, "drm_simple_encoder_init() failed, error %d\n",
ret);
goto err_drm_connector_cleanup;
}
/* we get HPD via client monitors config */
connector->polled = DRM_CONNECTOR_POLL_HPD;
encoder->possible_crtcs = 1 << num_output;
drm_connector_attach_encoder(&qxl_output->base,
&qxl_output->enc);
drm_encoder_helper_add(encoder, &qxl_enc_helper_funcs);
drm_connector_helper_add(connector, &qxl_connector_helper_funcs);
drm_object_attach_property(&connector->base,
@ -1116,6 +1110,11 @@ static int qdev_output_init(struct drm_device *dev, int num_output)
drm_object_attach_property(&connector->base,
dev->mode_config.suggested_y_property, 0);
return 0;
err_drm_connector_cleanup:
drm_connector_cleanup(&qxl_output->base);
kfree(qxl_output);
return ret;
}
static struct drm_framebuffer *

View File

@ -303,23 +303,13 @@ static struct drm_connector *radeon_dp_add_mst_connector(struct drm_dp_mst_topol
static void radeon_dp_register_mst_connector(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct radeon_device *rdev = dev->dev_private;
radeon_fb_add_connector(rdev, connector);
drm_connector_register(connector);
}
static void radeon_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
struct drm_connector *connector)
{
struct radeon_connector *master = container_of(mgr, struct radeon_connector, mst_mgr);
struct drm_device *dev = master->base.dev;
struct radeon_device *rdev = dev->dev_private;
drm_connector_unregister(connector);
radeon_fb_remove_connector(rdev, connector);
drm_connector_cleanup(connector);
kfree(connector);

View File

@ -354,15 +354,10 @@ int radeon_fbdev_init(struct radeon_device *rdev)
drm_fb_helper_prepare(rdev->ddev, &rfbdev->helper,
&radeon_fb_helper_funcs);
ret = drm_fb_helper_init(rdev->ddev, &rfbdev->helper,
RADEONFB_CONN_LIMIT);
ret = drm_fb_helper_init(rdev->ddev, &rfbdev->helper);
if (ret)
goto free;
ret = drm_fb_helper_single_add_all_connectors(&rfbdev->helper);
if (ret)
goto fini;
/* disable all the possible outputs/crtcs before entering KMS mode */
drm_helper_disable_unused_functions(rdev->ddev);
@ -404,15 +399,3 @@ bool radeon_fbdev_robj_is_fb(struct radeon_device *rdev, struct radeon_bo *robj)
return true;
return false;
}
void radeon_fb_add_connector(struct radeon_device *rdev, struct drm_connector *connector)
{
if (rdev->mode_info.rfbdev)
drm_fb_helper_add_one_connector(&rdev->mode_info.rfbdev->helper, connector);
}
void radeon_fb_remove_connector(struct radeon_device *rdev, struct drm_connector *connector)
{
if (rdev->mode_info.rfbdev)
drm_fb_helper_remove_one_connector(&rdev->mode_info.rfbdev->helper, connector);
}

View File

@ -986,9 +986,6 @@ bool radeon_fbdev_robj_is_fb(struct radeon_device *rdev, struct radeon_bo *robj)
void radeon_crtc_handle_vblank(struct radeon_device *rdev, int crtc_id);
void radeon_fb_add_connector(struct radeon_device *rdev, struct drm_connector *connector);
void radeon_fb_remove_connector(struct radeon_device *rdev, struct drm_connector *connector);
void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id);
int radeon_align_pitch(struct radeon_device *rdev, int width, int bpp, bool tiled);

View File

@ -124,7 +124,7 @@ int rockchip_drm_fbdev_init(struct drm_device *dev)
drm_fb_helper_prepare(dev, helper, &rockchip_drm_fb_helper_funcs);
ret = drm_fb_helper_init(dev, helper, ROCKCHIP_MAX_CONNECTOR);
ret = drm_fb_helper_init(dev, helper);
if (ret < 0) {
DRM_DEV_ERROR(dev->dev,
"Failed to initialize drm fb helper - %d.\n",
@ -132,13 +132,6 @@ int rockchip_drm_fbdev_init(struct drm_device *dev)
return ret;
}
ret = drm_fb_helper_single_add_all_connectors(helper);
if (ret < 0) {
DRM_DEV_ERROR(dev->dev,
"Failed to add connectors - %d.\n", ret);
goto err_drm_fb_helper_fini;
}
ret = drm_fb_helper_initial_config(helper, PREFERRED_BPP);
if (ret < 0) {
DRM_DEV_ERROR(dev->dev,

View File

@ -314,19 +314,13 @@ static int tegra_fbdev_init(struct tegra_fbdev *fbdev,
struct drm_device *drm = fbdev->base.dev;
int err;
err = drm_fb_helper_init(drm, &fbdev->base, max_connectors);
err = drm_fb_helper_init(drm, &fbdev->base);
if (err < 0) {
dev_err(drm->dev, "failed to initialize DRM FB helper: %d\n",
err);
return err;
}
err = drm_fb_helper_single_add_all_connectors(&fbdev->base);
if (err < 0) {
dev_err(drm->dev, "failed to add connectors: %d\n", err);
goto fini;
}
err = drm_fb_helper_initial_config(&fbdev->base, preferred_bpp);
if (err < 0) {
dev_err(drm->dev, "failed to set initial configuration: %d\n",

View File

@ -17,6 +17,7 @@
#include "tidss_dispc.h"
#include "tidss_drv.h"
#include "tidss_irq.h"
#include "tidss_plane.h"
/* Page flip and frame done IRQs */
@ -111,6 +112,54 @@ static int tidss_crtc_atomic_check(struct drm_crtc *crtc,
return dispc_vp_bus_check(dispc, hw_videoport, state);
}
/*
* This needs all affected planes to be present in the atomic
* state. The untouched planes are added to the state in
* tidss_atomic_check().
*/
static void tidss_crtc_position_planes(struct tidss_device *tidss,
struct drm_crtc *crtc,
struct drm_crtc_state *old_state,
bool newmodeset)
{
struct drm_atomic_state *ostate = old_state->state;
struct tidss_crtc *tcrtc = to_tidss_crtc(crtc);
struct drm_crtc_state *cstate = crtc->state;
int layer;
if (!newmodeset && !cstate->zpos_changed &&
!to_tidss_crtc_state(cstate)->plane_pos_changed)
return;
for (layer = 0; layer < tidss->feat->num_planes; layer++) {
struct drm_plane_state *pstate;
struct drm_plane *plane;
bool layer_active = false;
int i;
for_each_new_plane_in_state(ostate, plane, pstate, i) {
if (pstate->crtc != crtc || !pstate->visible)
continue;
if (pstate->normalized_zpos == layer) {
layer_active = true;
break;
}
}
if (layer_active) {
struct tidss_plane *tplane = to_tidss_plane(plane);
dispc_ovr_set_plane(tidss->dispc, tplane->hw_plane_id,
tcrtc->hw_videoport,
pstate->crtc_x, pstate->crtc_y,
layer);
}
dispc_ovr_enable_layer(tidss->dispc, tcrtc->hw_videoport, layer,
layer_active);
}
}
static void tidss_crtc_atomic_flush(struct drm_crtc *crtc,
struct drm_crtc_state *old_crtc_state)
{
@ -146,6 +195,9 @@ static void tidss_crtc_atomic_flush(struct drm_crtc *crtc,
/* Write vp properties to HW if needed. */
dispc_vp_setup(tidss->dispc, tcrtc->hw_videoport, crtc->state, false);
/* Update plane positions if needed. */
tidss_crtc_position_planes(tidss, crtc, old_crtc_state, false);
WARN_ON(drm_crtc_vblank_get(crtc) != 0);
spin_lock_irqsave(&ddev->event_lock, flags);
@ -183,6 +235,7 @@ static void tidss_crtc_atomic_enable(struct drm_crtc *crtc,
return;
dispc_vp_setup(tidss->dispc, tcrtc->hw_videoport, crtc->state, true);
tidss_crtc_position_planes(tidss, crtc, old_state, true);
/* Turn vertical blanking interrupt reporting on. */
drm_crtc_vblank_on(crtc);
@ -318,6 +371,8 @@ static struct drm_crtc_state *tidss_crtc_duplicate_state(struct drm_crtc *crtc)
__drm_atomic_helper_crtc_duplicate_state(crtc, &state->base);
state->plane_pos_changed = false;
state->bus_format = current_state->bus_format;
state->bus_flags = current_state->bus_flags;

View File

@ -32,6 +32,8 @@ struct tidss_crtc_state {
/* Must be first. */
struct drm_crtc_state base;
bool plane_pos_changed;
u32 bus_format;
u32 bus_flags;
};

View File

@ -281,11 +281,6 @@ struct dss_vp_data {
u32 *gamma_table;
};
struct dss_plane_data {
u32 zorder;
u32 hw_videoport;
};
struct dispc_device {
struct tidss_device *tidss;
struct device *dev;
@ -307,8 +302,6 @@ struct dispc_device {
struct dss_vp_data vp_data[TIDSS_MAX_PORTS];
struct dss_plane_data plane_data[TIDSS_MAX_PLANES];
u32 *fourccs;
u32 num_fourccs;
@ -1235,7 +1228,7 @@ int dispc_vp_set_clk_rate(struct dispc_device *dispc, u32 hw_videoport,
if (dispc_pclk_diff(rate, new_rate) > 5)
dev_warn(dispc->dev,
"vp%d: Clock rate %lu differs over 5%% from requsted %lu\n",
"vp%d: Clock rate %lu differs over 5%% from requested %lu\n",
hw_videoport, new_rate, rate);
dev_dbg(dispc->dev, "vp%d: new rate %lu Hz (requested %lu Hz)\n",
@ -1247,7 +1240,7 @@ int dispc_vp_set_clk_rate(struct dispc_device *dispc, u32 hw_videoport,
/* OVR */
static void dispc_k2g_ovr_set_plane(struct dispc_device *dispc,
u32 hw_plane, u32 hw_videoport,
u32 x, u32 y, u32 zpos)
u32 x, u32 y, u32 layer)
{
/* On k2g there is only one plane and no need for ovr */
dispc_vid_write(dispc, hw_plane, DISPC_VID_K2G_POSITION,
@ -1256,44 +1249,43 @@ static void dispc_k2g_ovr_set_plane(struct dispc_device *dispc,
static void dispc_am65x_ovr_set_plane(struct dispc_device *dispc,
u32 hw_plane, u32 hw_videoport,
u32 x, u32 y, u32 zpos)
u32 x, u32 y, u32 layer)
{
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES(zpos),
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES(layer),
hw_plane, 4, 1);
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES(zpos),
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES(layer),
x, 17, 6);
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES(zpos),
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES(layer),
y, 30, 19);
}
static void dispc_j721e_ovr_set_plane(struct dispc_device *dispc,
u32 hw_plane, u32 hw_videoport,
u32 x, u32 y, u32 zpos)
u32 x, u32 y, u32 layer)
{
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES(zpos),
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES(layer),
hw_plane, 4, 1);
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES2(zpos),
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES2(layer),
x, 13, 0);
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES2(zpos),
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES2(layer),
y, 29, 16);
}
static void dispc_ovr_set_plane(struct dispc_device *dispc,
u32 hw_plane, u32 hw_videoport,
u32 x, u32 y, u32 zpos)
void dispc_ovr_set_plane(struct dispc_device *dispc, u32 hw_plane,
u32 hw_videoport, u32 x, u32 y, u32 layer)
{
switch (dispc->feat->subrev) {
case DISPC_K2G:
dispc_k2g_ovr_set_plane(dispc, hw_plane, hw_videoport,
x, y, zpos);
x, y, layer);
break;
case DISPC_AM65X:
dispc_am65x_ovr_set_plane(dispc, hw_plane, hw_videoport,
x, y, zpos);
x, y, layer);
break;
case DISPC_J721E:
dispc_j721e_ovr_set_plane(dispc, hw_plane, hw_videoport,
x, y, zpos);
x, y, layer);
break;
default:
WARN_ON(1);
@ -1301,10 +1293,13 @@ static void dispc_ovr_set_plane(struct dispc_device *dispc,
}
}
static void dispc_ovr_enable_plane(struct dispc_device *dispc,
u32 hw_videoport, u32 zpos, bool enable)
void dispc_ovr_enable_layer(struct dispc_device *dispc,
u32 hw_videoport, u32 layer, bool enable)
{
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES(zpos),
if (dispc->feat->subrev == DISPC_K2G)
return;
OVR_REG_FLD_MOD(dispc, hw_videoport, DISPC_OVR_ATTRIBUTES(layer),
!!enable, 0, 0);
}
@ -1510,7 +1505,7 @@ struct dispc_csc_coef *dispc_find_csc(enum drm_color_encoding encoding,
static void dispc_vid_csc_setup(struct dispc_device *dispc, u32 hw_plane,
const struct drm_plane_state *state)
{
static const struct dispc_csc_coef *coef;
const struct dispc_csc_coef *coef;
coef = dispc_find_csc(state->color_encoding, state->color_range);
if (!coef) {
@ -1699,7 +1694,7 @@ static int dispc_vid_calc_scaling(struct dispc_device *dispc,
if (sp->xinc > f->xinc_max) {
dev_dbg(dispc->dev,
"%s: Too wide input bufer %u > %u\n", __func__,
"%s: Too wide input buffer %u > %u\n", __func__,
state->src_w >> 16, in_width_max * f->xinc_max);
return -EINVAL;
}
@ -2070,21 +2065,11 @@ int dispc_plane_setup(struct dispc_device *dispc, u32 hw_plane,
VID_REG_FLD_MOD(dispc, hw_plane, DISPC_VID_ATTRIBUTES, 0,
28, 28);
dispc_ovr_set_plane(dispc, hw_plane, hw_videoport,
state->crtc_x, state->crtc_y,
state->normalized_zpos);
dispc->plane_data[hw_plane].zorder = state->normalized_zpos;
dispc->plane_data[hw_plane].hw_videoport = hw_videoport;
return 0;
}
int dispc_plane_enable(struct dispc_device *dispc, u32 hw_plane, bool enable)
{
dispc_ovr_enable_plane(dispc, dispc->plane_data[hw_plane].hw_videoport,
dispc->plane_data[hw_plane].zorder, enable);
VID_REG_FLD_MOD(dispc, hw_plane, DISPC_VID_ATTRIBUTES, !!enable, 0, 0);
return 0;

View File

@ -94,6 +94,11 @@ extern const struct dispc_features dispc_j721e_feats;
void dispc_set_irqenable(struct dispc_device *dispc, dispc_irq_t mask);
dispc_irq_t dispc_read_and_clear_irqstatus(struct dispc_device *dispc);
void dispc_ovr_set_plane(struct dispc_device *dispc, u32 hw_plane,
u32 hw_videoport, u32 x, u32 y, u32 layer);
void dispc_ovr_enable_layer(struct dispc_device *dispc,
u32 hw_videoport, u32 layer, bool enable);
void dispc_vp_prepare(struct dispc_device *dispc, u32 hw_videoport,
const struct drm_crtc_state *state);
void dispc_vp_enable(struct dispc_device *dispc, u32 hw_videoport,

View File

@ -32,7 +32,7 @@ static int tidss_encoder_atomic_check(struct drm_encoder *encoder,
* bridge timings, or from the connector's display_info if no
* bridge defines the timings.
*/
list_for_each_entry(bridge, &encoder->bridge_chain, chain_node) {
drm_for_each_bridge_in_chain(encoder, bridge) {
if (!bridge->timings)
continue;

View File

@ -47,9 +47,59 @@ static const struct drm_mode_config_helper_funcs mode_config_helper_funcs = {
.atomic_commit_tail = tidss_atomic_commit_tail,
};
static int tidss_atomic_check(struct drm_device *ddev,
struct drm_atomic_state *state)
{
struct drm_plane_state *opstate;
struct drm_plane_state *npstate;
struct drm_plane *plane;
struct drm_crtc_state *cstate;
struct drm_crtc *crtc;
int ret, i;
ret = drm_atomic_helper_check(ddev, state);
if (ret)
return ret;
/*
* Add all active planes on a CRTC to the atomic state, if
* x/y/z position or activity of any plane on that CRTC
* changes. This is needed for updating the plane positions in
* tidss_crtc_position_planes() which is called from
* crtc_atomic_enable() and crtc_atomic_flush(). We have an
* extra flag to mark x,y-position changes and together
* with zpos_changed the condition recognizes all the above
* cases.
*/
for_each_oldnew_plane_in_state(state, plane, opstate, npstate, i) {
if (!npstate->crtc || !npstate->visible)
continue;
if (!opstate->crtc || opstate->crtc_x != npstate->crtc_x ||
opstate->crtc_y != npstate->crtc_y) {
cstate = drm_atomic_get_crtc_state(state,
npstate->crtc);
if (IS_ERR(cstate))
return PTR_ERR(cstate);
to_tidss_crtc_state(cstate)->plane_pos_changed = true;
}
}
for_each_new_crtc_in_state(state, crtc, cstate, i) {
if (to_tidss_crtc_state(cstate)->plane_pos_changed ||
cstate->zpos_changed) {
ret = drm_atomic_add_affected_planes(state, crtc);
if (ret)
return ret;
}
}
return 0;
}
static const struct drm_mode_config_funcs mode_config_funcs = {
.fb_create = drm_gem_fb_create,
.atomic_check = drm_atomic_helper_check,
.atomic_check = tidss_atomic_check,
.atomic_commit = drm_atomic_helper_commit,
};

View File

@ -1196,6 +1196,18 @@ int ttm_bo_validate(struct ttm_buffer_object *bo,
uint32_t new_flags;
dma_resv_assert_held(bo->base.resv);
/*
* Remove the backing store if no placement is given.
*/
if (!placement->num_placement && !placement->num_busy_placement) {
ret = ttm_bo_pipeline_gutting(bo);
if (ret)
return ret;
return ttm_tt_create(bo, false);
}
/*
* Check whether we need to move buffer.
*/
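/*
 * Editor's sketch (not part of this patch): with the change above, a driver
 * can drop a buffer's backing store by validating it against an empty
 * placement.  example_purge_bo() is hypothetical; the caller must hold the
 * buffer's reservation lock, as ttm_bo_validate() asserts.
 */
static int example_purge_bo(struct ttm_buffer_object *bo)
{
	struct ttm_placement placement = {};	/* no placement, no busy placement */
	struct ttm_operation_ctx ctx = { .interruptible = false,
					 .no_wait_gpu = false };

	return ttm_bo_validate(bo, &placement, &ctx);
}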

View File

@ -254,27 +254,42 @@ struct v3d_csd_job {
};
/**
* _wait_for - magic (register) wait macro
* __wait_for - magic wait macro
*
* Does the right thing for modeset paths when run under kdgb or similar atomic
* contexts. Note that it's important that we check the condition again after
* having timed out, since the timeout could be due to preemption or similar and
* we've never had a chance to check the condition before the timeout.
* Macro to help avoid open coding check/wait/timeout patterns. Note that it's
* important that we check the condition again after having timed out, since the
* timeout could be due to preemption or similar and we've never had a chance to
* check the condition before the timeout.
*/
#define wait_for(COND, MS) ({ \
unsigned long timeout__ = jiffies + msecs_to_jiffies(MS) + 1; \
int ret__ = 0; \
while (!(COND)) { \
if (time_after(jiffies, timeout__)) { \
if (!(COND)) \
ret__ = -ETIMEDOUT; \
#define __wait_for(OP, COND, US, Wmin, Wmax) ({ \
const ktime_t end__ = ktime_add_ns(ktime_get_raw(), 1000ll * (US)); \
long wait__ = (Wmin); /* recommended min for usleep is 10 us */ \
int ret__; \
might_sleep(); \
for (;;) { \
const bool expired__ = ktime_after(ktime_get_raw(), end__); \
OP; \
/* Guarantee COND check prior to timeout */ \
barrier(); \
if (COND) { \
ret__ = 0; \
break; \
} \
msleep(1); \
if (expired__) { \
ret__ = -ETIMEDOUT; \
break; \
} \
usleep_range(wait__, wait__ * 2); \
if (wait__ < (Wmax)) \
wait__ <<= 1; \
} \
ret__; \
})
#define _wait_for(COND, US, Wmin, Wmax) __wait_for(, (COND), (US), (Wmin), \
(Wmax))
#define wait_for(COND, MS) _wait_for((COND), (MS) * 1000, 10, 1000)
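/*
 * Editor's sketch (not part of this patch): typical use of the wait_for()
 * helper above.  The status register and bit are placeholders; the second
 * argument is the timeout in milliseconds.
 */
static inline int example_wait_for_bit(void __iomem *status_reg, u32 bit)
{
	/* poll for up to 100 ms, sleeping between reads */
	return wait_for(readl(status_reg) & bit, 100);
}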
static inline unsigned long nsecs_to_jiffies_timeout(const u64 n)
{
/* nsecs_to_jiffies64() does not guard against overflow */

View File

@ -138,7 +138,7 @@ struct vbva_buffer {
u32 data_len;
/* variable size for the rest of the vbva_buffer area in VRAM. */
u8 data[0];
u8 data[];
} __packed;
#define VBVA_MAX_RECORD_SIZE (128 * 1024 * 1024)

View File

@ -65,7 +65,7 @@ struct vc4_perfmon {
* Note that counter values can't be reset, but you can fake a reset by
* destroying the perfmon and creating a new one.
*/
u64 counters[0];
u64 counters[];
};
struct vc4_dev {
@ -677,32 +677,41 @@ struct vc4_validated_shader_info {
};
/**
* _wait_for - magic (register) wait macro
* __wait_for - magic wait macro
*
* Does the right thing for modeset paths when run under kdgb or similar atomic
* contexts. Note that it's important that we check the condition again after
* having timed out, since the timeout could be due to preemption or similar and
* we've never had a chance to check the condition before the timeout.
* Macro to help avoid open coding check/wait/timeout patterns. Note that it's
* important that we check the condition again after having timed out, since the
* timeout could be due to preemption or similar and we've never had a chance to
* check the condition before the timeout.
*/
#define _wait_for(COND, MS, W) ({ \
unsigned long timeout__ = jiffies + msecs_to_jiffies(MS) + 1; \
int ret__ = 0; \
while (!(COND)) { \
if (time_after(jiffies, timeout__)) { \
if (!(COND)) \
ret__ = -ETIMEDOUT; \
#define __wait_for(OP, COND, US, Wmin, Wmax) ({ \
const ktime_t end__ = ktime_add_ns(ktime_get_raw(), 1000ll * (US)); \
long wait__ = (Wmin); /* recommended min for usleep is 10 us */ \
int ret__; \
might_sleep(); \
for (;;) { \
const bool expired__ = ktime_after(ktime_get_raw(), end__); \
OP; \
/* Guarantee COND check prior to timeout */ \
barrier(); \
if (COND) { \
ret__ = 0; \
break; \
} \
if (W && drm_can_sleep()) { \
msleep(W); \
} else { \
cpu_relax(); \
if (expired__) { \
ret__ = -ETIMEDOUT; \
break; \
} \
usleep_range(wait__, wait__ * 2); \
if (wait__ < (Wmax)) \
wait__ <<= 1; \
} \
ret__; \
})
#define wait_for(COND, MS) _wait_for(COND, MS, 1)
#define _wait_for(COND, US, Wmin, Wmax) __wait_for(, (COND), (US), (Wmin), \
(Wmax))
#define wait_for(COND, MS) _wait_for((COND), (MS) * 1000, 10, 1000)
/* vc4_bo.c */
struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size);

View File

@ -69,16 +69,21 @@ struct virtio_gpu_object_params {
struct virtio_gpu_object {
struct drm_gem_shmem_object base;
uint32_t hw_res_handle;
struct sg_table *pages;
uint32_t mapped;
bool dumb;
bool created;
};
#define gem_to_virtio_gpu_obj(gobj) \
container_of((gobj), struct virtio_gpu_object, base.base)
struct virtio_gpu_object_shmem {
struct virtio_gpu_object base;
struct sg_table *pages;
uint32_t mapped;
};
#define to_virtio_gpu_shmem(virtio_gpu_object) \
container_of((virtio_gpu_object), struct virtio_gpu_object_shmem, base)
struct virtio_gpu_object_array {
struct ww_acquire_ctx ticket;
struct list_head next;
@ -366,7 +371,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object **bo_ptr,
struct virtio_gpu_fence *fence);
bool virtio_gpu_is_shmem(struct drm_gem_object *obj);
bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo);
/* virtgpu_prime.c */
struct drm_gem_object *virtgpu_gem_prime_import_sg_table(

View File

@ -66,19 +66,25 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
{
struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
if (bo->pages) {
if (bo->mapped) {
dma_unmap_sg(vgdev->vdev->dev.parent,
bo->pages->sgl, bo->mapped,
DMA_TO_DEVICE);
bo->mapped = 0;
}
sg_free_table(bo->pages);
bo->pages = NULL;
drm_gem_shmem_unpin(&bo->base.base);
}
virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
drm_gem_shmem_free_object(&bo->base.base);
if (virtio_gpu_is_shmem(bo)) {
struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
if (shmem->pages) {
if (shmem->mapped) {
dma_unmap_sg(vgdev->vdev->dev.parent,
shmem->pages->sgl, shmem->mapped,
DMA_TO_DEVICE);
shmem->mapped = 0;
}
sg_free_table(shmem->pages);
shmem->pages = NULL;
drm_gem_shmem_unpin(&bo->base.base);
}
drm_gem_shmem_free_object(&bo->base.base);
}
}
static void virtio_gpu_free_object(struct drm_gem_object *obj)
@ -109,9 +115,9 @@ static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = {
.mmap = drm_gem_shmem_mmap,
};
bool virtio_gpu_is_shmem(struct drm_gem_object *obj)
bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo)
{
return obj->funcs == &virtio_gpu_shmem_funcs;
return bo->base.base.funcs == &virtio_gpu_shmem_funcs;
}
struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
@ -134,6 +140,7 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
unsigned int *nents)
{
bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
struct scatterlist *sg;
int si, ret;
@ -141,19 +148,20 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
if (ret < 0)
return -EINVAL;
bo->pages = drm_gem_shmem_get_sg_table(&bo->base.base);
if (!bo->pages) {
shmem->pages = drm_gem_shmem_get_sg_table(&bo->base.base);
if (!shmem->pages) {
drm_gem_shmem_unpin(&bo->base.base);
return -EINVAL;
}
if (use_dma_api) {
bo->mapped = dma_map_sg(vgdev->vdev->dev.parent,
bo->pages->sgl, bo->pages->nents,
DMA_TO_DEVICE);
*nents = bo->mapped;
shmem->mapped = dma_map_sg(vgdev->vdev->dev.parent,
shmem->pages->sgl,
shmem->pages->nents,
DMA_TO_DEVICE);
*nents = shmem->mapped;
} else {
*nents = bo->pages->nents;
*nents = shmem->pages->nents;
}
*ents = kmalloc_array(*nents, sizeof(struct virtio_gpu_mem_entry),
@ -163,7 +171,7 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
return -ENOMEM;
}
for_each_sg(bo->pages->sgl, sg, *nents, si) {
for_each_sg(shmem->pages->sgl, sg, *nents, si) {
(*ents)[si].addr = cpu_to_le64(use_dma_api
? sg_dma_address(sg)
: sg_phys(sg));

View File

@ -600,10 +600,11 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
struct virtio_gpu_transfer_to_host_2d *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
if (use_dma_api)
dma_sync_sg_for_device(vgdev->vdev->dev.parent,
bo->pages->sgl, bo->pages->nents,
shmem->pages->sgl, shmem->pages->nents,
DMA_TO_DEVICE);
cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
@ -1015,10 +1016,11 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
struct virtio_gpu_transfer_host_3d *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
if (use_dma_api)
dma_sync_sg_for_device(vgdev->vdev->dev.parent,
bo->pages->sgl, bo->pages->nents,
shmem->pages->sgl, shmem->pages->nents,
DMA_TO_DEVICE);
cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));


@ -435,7 +435,7 @@ config FB_FM2
config FB_ARC
tristate "Arc Monochrome LCD board support"
depends on FB && X86
depends on FB && (X86 || COMPILE_TEST)
select FB_SYS_FILLRECT
select FB_SYS_COPYAREA
select FB_SYS_IMAGEBLIT
@ -1639,7 +1639,7 @@ config FB_VT8500
config FB_WM8505
bool "Wondermedia WM8xxx-series frame buffer support"
depends on (FB = y) && ARM && ARCH_VT8500
depends on (FB = y) && HAS_IOMEM && (ARCH_VT8500 || COMPILE_TEST)
select FB_SYS_FILLRECT if (!FB_WMT_GE_ROPS)
select FB_SYS_COPYAREA if (!FB_WMT_GE_ROPS)
select FB_SYS_IMAGEBLIT
@ -1827,7 +1827,7 @@ config FB_FSL_DIU
config FB_W100
tristate "W100 frame buffer support"
depends on FB && ARCH_PXA
depends on FB && HAS_IOMEM && (ARCH_PXA || COMPILE_TEST)
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
@ -1844,7 +1844,8 @@ config FB_W100
config FB_SH_MOBILE_LCDC
tristate "SuperH Mobile LCDC framebuffer support"
depends on FB && (SUPERH || ARCH_RENESAS) && HAVE_CLK
depends on FB && HAVE_CLK && HAS_IOMEM
depends on SUPERH || ARCH_RENESAS || COMPILE_TEST
select FB_SYS_FILLRECT
select FB_SYS_COPYAREA
select FB_SYS_IMAGEBLIT


@ -618,14 +618,13 @@ static int aty_var_to_pll_8398(const struct fb_info *info, u32 vclk_per,
u32 mhz100; /* in 0.01 MHz */
u32 program_bits;
/* u32 post_divider; */
u32 mach64MinFreq, mach64MaxFreq, mach64RefFreq;
u32 mach64MinFreq, mach64MaxFreq;
u16 m, n, k = 0, save_m, save_n, twoToKth;
/* Calculate the programming word */
mhz100 = 100000000 / vclk_per;
mach64MinFreq = MIN_FREQ_2595;
mach64MaxFreq = MAX_FREQ_2595;
mach64RefFreq = REF_FREQ_2595; /* 14.32 MHz */
save_m = 0;
save_n = 0;


@ -849,12 +849,6 @@ static int radeonfb_check_var (struct fb_var_screeninfo *var, struct fb_info *in
case 9 ... 16:
v.bits_per_pixel = 16;
break;
case 17 ... 24:
#if 0 /* Doesn't seem to work */
v.bits_per_pixel = 24;
break;
#endif
return -EINVAL;
case 25 ... 32:
v.bits_per_pixel = 32;
break;
@ -1650,14 +1644,14 @@ static int radeonfb_set_par(struct fb_info *info)
struct fb_var_screeninfo *mode = &info->var;
struct radeon_regs *newmode;
int hTotal, vTotal, hSyncStart, hSyncEnd,
hSyncPol, vSyncStart, vSyncEnd, vSyncPol, cSync;
vSyncStart, vSyncEnd;
u8 hsync_adj_tab[] = {0, 0x12, 9, 9, 6, 5};
u8 hsync_fudge_fp[] = {2, 2, 0, 0, 5, 5};
u32 sync, h_sync_pol, v_sync_pol, dotClock, pixClock;
int i, freq;
int format = 0;
int nopllcalc = 0;
int hsync_start, hsync_fudge, bytpp, hsync_wid, vsync_wid;
int hsync_start, hsync_fudge, hsync_wid, vsync_wid;
int primary_mon = PRIMARY_MONITOR(rinfo);
int depth = var_to_depth(mode);
int use_rmx = 0;
@ -1730,13 +1724,7 @@ static int radeonfb_set_par(struct fb_info *info)
else if (vsync_wid > 0x1f) /* max */
vsync_wid = 0x1f;
hSyncPol = mode->sync & FB_SYNC_HOR_HIGH_ACT ? 0 : 1;
vSyncPol = mode->sync & FB_SYNC_VERT_HIGH_ACT ? 0 : 1;
cSync = mode->sync & FB_SYNC_COMP_HIGH_ACT ? (1 << 4) : 0;
format = radeon_get_dstbpp(depth);
bytpp = mode->bits_per_pixel >> 3;
if ((primary_mon == MT_DFP) || (primary_mon == MT_LCD))
hsync_fudge = hsync_fudge_fp[format-1];
@ -2548,16 +2536,6 @@ static void radeonfb_pci_unregister(struct pci_dev *pdev)
if (rinfo->mon2_EDID)
sysfs_remove_bin_file(&rinfo->pdev->dev.kobj, &edid2_attr);
#if 0
/* restore original state
*
* Doesn't quite work yet, I suspect if we come from a legacy
* VGA mode (or worse, text mode), we need to do some VGA black
* magic here that I know nothing about. --BenH
*/
radeon_write_mode (rinfo, &rinfo->init_state, 1);
#endif
del_timer_sync(&rinfo->lvds_timer);
arch_phys_wc_del(rinfo->wc_cookie);
unregister_framebuffer(info);


@ -331,7 +331,7 @@ int SetOverlayViewPort(volatile STG4000REG __iomem *pSTGReg,
u32 ulScale;
u32 ulLeft, ulRight;
u32 ulSrcLeft, ulSrcRight;
u32 ulScaleLeft, ulScaleRight;
u32 ulScaleLeft;
u32 ulhDecim;
u32 ulsVal;
u32 ulVertDecFactor;
@ -470,7 +470,6 @@ int SetOverlayViewPort(volatile STG4000REG __iomem *pSTGReg,
* round down the pixel pos to the nearest 8 pixels.
*/
ulScaleLeft = ulSrcLeft;
ulScaleRight = ulSrcRight;
/* shift fxscale until it is in the range of the scaler */
ulhDecim = 0;


@ -1376,6 +1376,12 @@ static struct video_board vbG200 = {
.accelID = FB_ACCEL_MATROX_MGAG200,
.lowlevel = &matrox_G100
};
static struct video_board vbG200eW = {
.maxvram = 0x800000,
.maxdisplayable = 0x800000,
.accelID = FB_ACCEL_MATROX_MGAG200,
.lowlevel = &matrox_G100
};
/* from doc it looks like that accelerator can draw only to low 16MB :-( Direct accesses & displaying are OK for
whole 32MB */
static struct video_board vbG400 = {
@ -1494,6 +1500,13 @@ static struct board {
MGA_G200,
&vbG200,
"MGA-G200 (PCI)"},
{PCI_VENDOR_ID_MATROX, 0x0532, 0xFF,
0, 0,
DEVF_G200,
250000,
MGA_G200,
&vbG200eW,
"MGA-G200eW (PCI)"},
{PCI_VENDOR_ID_MATROX, PCI_DEVICE_ID_MATROX_G200_AGP, 0xFF,
PCI_SS_VENDOR_ID_MATROX, PCI_SS_ID_MATROX_GENERIC,
DEVF_G200,
@ -2136,6 +2149,8 @@ static const struct pci_device_id matroxfb_devices[] = {
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{PCI_VENDOR_ID_MATROX, PCI_DEVICE_ID_MATROX_G200_PCI,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{PCI_VENDOR_ID_MATROX, 0x0532,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{PCI_VENDOR_ID_MATROX, PCI_DEVICE_ID_MATROX_G200_AGP,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{PCI_VENDOR_ID_MATROX, PCI_DEVICE_ID_MATROX_G400,


@ -1406,7 +1406,7 @@ struct mmphw_ctrl {
/*pathes*/
int path_num;
struct mmphw_path_plat path_plats[0];
struct mmphw_path_plat path_plats[];
};
static inline int overlay_is_vid(struct mmp_overlay *overlay)


@ -779,7 +779,6 @@ static int pxa168fb_remove(struct platform_device *pdev)
{
struct pxa168fb_info *fbi = platform_get_drvdata(pdev);
struct fb_info *info;
int irq;
unsigned int data;
if (!fbi)
@ -799,8 +798,6 @@ static int pxa168fb_remove(struct platform_device *pdev)
if (info->cmap.len)
fb_dealloc_cmap(&info->cmap);
irq = platform_get_irq(pdev, 0);
dma_free_wc(fbi->dev, info->fix.smem_len,
info->screen_base, info->fix.smem_start);


@ -1572,7 +1572,7 @@ sh_mobile_lcdc_overlay_fb_init(struct sh_mobile_lcdc_overlay *ovl)
info->flags = FBINFO_FLAG_DEFAULT;
info->fbops = &sh_mobile_lcdc_overlay_ops;
info->device = priv->dev;
info->screen_base = ovl->fb_mem;
info->screen_buffer = ovl->fb_mem;
info->par = ovl;
/* Initialize fixed screen information. Restrict pan to 2 lines steps
@ -2056,7 +2056,7 @@ sh_mobile_lcdc_channel_fb_init(struct sh_mobile_lcdc_chan *ch,
info->flags = FBINFO_FLAG_DEFAULT;
info->fbops = &sh_mobile_lcdc_ops;
info->device = priv->dev;
info->screen_base = ch->fb_mem;
info->screen_buffer = ch->fb_mem;
info->pseudo_palette = &ch->pseudo_palette;
info->par = ch;
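
For context on the screen_base to screen_buffer switch in this hunk (and in the wm8505fb hunk further down): struct fb_info keeps the two members in a union, and the non-__iomem one is the right choice for framebuffers that live in ordinary system memory, which is what these drivers allocate. Paraphrased sketch of the relevant part of include/linux/fb.h, not a quote from this patch:

	union {
		char __iomem *screen_base;	/* ioremap()ed MMIO/VRAM framebuffer */
		char *screen_buffer;		/* vmalloc()/dma_alloc_*() system memory */
	};
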


@ -89,7 +89,7 @@ struct ssd1307fb_par {
struct ssd1307fb_array {
u8 type;
u8 data[0];
u8 data[];
};
static const struct fb_fix_screeninfo ssd1307fb_fix = {


@ -61,9 +61,9 @@ struct w100_pll_info *w100_get_xtal_table(unsigned int freq);
#define BITS_PER_PIXEL 16
/* Remapped addresses for base cfg, memmapped regs and the frame buffer itself */
static void *remapped_base;
static void *remapped_regs;
static void *remapped_fbuf;
static void __iomem *remapped_base;
static void __iomem *remapped_regs;
static void __iomem *remapped_fbuf;
#define REMAPPED_FB_LEN 0x15ffff
@ -635,7 +635,7 @@ static int w100fb_resume(struct platform_device *dev)
#endif
int w100fb_probe(struct platform_device *pdev)
static int w100fb_probe(struct platform_device *pdev)
{
int err = -EIO;
struct w100fb_mach_info *inf;
@ -807,10 +807,11 @@ static int w100fb_remove(struct platform_device *pdev)
static void w100_soft_reset(void)
{
u16 val = readw((u16 *) remapped_base + cfgSTATUS);
writew(val | 0x08, (u16 *) remapped_base + cfgSTATUS);
u16 val = readw((u16 __iomem *)remapped_base + cfgSTATUS);
writew(val | 0x08, (u16 __iomem *)remapped_base + cfgSTATUS);
udelay(100);
writew(0x00, (u16 *) remapped_base + cfgSTATUS);
writew(0x00, (u16 __iomem *)remapped_base + cfgSTATUS);
udelay(100);
}
@ -1022,7 +1023,8 @@ struct w100_pll_info *w100_get_xtal_table(unsigned int freq)
return pll_entry->pll_table;
pll_entry++;
} while (pll_entry->xtal_freq);
return 0;
return NULL;
}
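
The __iomem annotations added above make the w100fb mappings visible to sparse; such pointers should only be touched through the MMIO accessors. A small sketch of the intended pattern (function name and offset are placeholders, assuming <linux/io.h>):

static u16 example_read_cfg(void __iomem *base, unsigned long off)
{
	/* readw()/writew(), never a direct *(u16 *)(base + off) dereference */
	return readw(base + off);
}
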


@ -339,7 +339,7 @@ static int wm8505fb_probe(struct platform_device *pdev)
fbi->fb.fix.smem_start = fb_mem_phys;
fbi->fb.fix.smem_len = fb_mem_len;
fbi->fb.screen_base = fb_mem_virt;
fbi->fb.screen_buffer = fb_mem_virt;
fbi->fb.screen_size = fb_mem_len;
fbi->contrast = 0x10;


@ -327,13 +327,13 @@ struct mhl_burst_bits_per_pixel_fmt {
struct {
u8 stream_id;
u8 pixel_format;
} __packed desc[0];
} __packed desc[];
} __packed;
struct mhl_burst_emsc_support {
struct mhl3_burst_header hdr;
u8 num_entries;
__be16 burst_id[0];
__be16 burst_id[];
} __packed;
struct mhl_burst_audio_descr {


@ -213,8 +213,7 @@ drm_fb_helper_from_client(struct drm_client_dev *client)
#ifdef CONFIG_DRM_FBDEV_EMULATION
void drm_fb_helper_prepare(struct drm_device *dev, struct drm_fb_helper *helper,
const struct drm_fb_helper_funcs *funcs);
int drm_fb_helper_init(struct drm_device *dev,
struct drm_fb_helper *helper, int max_conn);
int drm_fb_helper_init(struct drm_device *dev, struct drm_fb_helper *helper);
void drm_fb_helper_fini(struct drm_fb_helper *helper);
int drm_fb_helper_blank(int blank, struct fb_info *info);
int drm_fb_helper_pan_display(struct fb_var_screeninfo *var,
@ -279,8 +278,7 @@ static inline void drm_fb_helper_prepare(struct drm_device *dev,
}
static inline int drm_fb_helper_init(struct drm_device *dev,
struct drm_fb_helper *helper,
int max_conn)
struct drm_fb_helper *helper)
{
/* So drivers can use it to free the struct */
helper->dev = dev;
@ -453,27 +451,6 @@ drm_fbdev_generic_setup(struct drm_device *dev, unsigned int preferred_bpp)
#endif
/* TODO: There's a todo entry to remove these three */
static inline int
drm_fb_helper_single_add_all_connectors(struct drm_fb_helper *fb_helper)
{
return 0;
}
static inline int
drm_fb_helper_add_one_connector(struct drm_fb_helper *fb_helper,
struct drm_connector *connector)
{
return 0;
}
static inline int
drm_fb_helper_remove_one_connector(struct drm_fb_helper *fb_helper,
struct drm_connector *connector)
{
return 0;
}
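
With the connector-tracking stubs gone and the max_conn argument dropped, a driver's manual fbdev setup reduces to roughly the following. This is a hedged sketch only; the wrapper function is hypothetical, and most drivers should simply call drm_fbdev_generic_setup():

static int example_fbdev_setup(struct drm_device *dev,
			       struct drm_fb_helper *helper,
			       const struct drm_fb_helper_funcs *funcs)
{
	drm_fb_helper_prepare(dev, helper, funcs);

	/* no connector count and no *_add_one_connector() calls any more */
	return drm_fb_helper_init(dev, helper);
}
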
/**
* drm_fb_helper_remove_conflicting_framebuffers - remove firmware-configured framebuffers
* @a: memory range, users of which are to be removed


@ -276,7 +276,7 @@ void drm_hdcp_cpu_to_be24(u8 seq_num[HDCP_2_2_SEQ_NUM_LEN], u32 val)
#define DRM_HDCP_2_VRL_LENGTH_SIZE 3
#define DRM_HDCP_2_DCP_SIG_SIZE 384
#define DRM_HDCP_2_NO_OF_DEV_PLUS_RESERVED_SZ 4
#define DRM_HDCP_2_KSV_COUNT_2_LSBITS(byte) (((byte) & 0xC) >> 6)
#define DRM_HDCP_2_KSV_COUNT_2_LSBITS(byte) (((byte) & 0xC0) >> 6)
struct hdcp_srm_header {
u8 srm_id;
@ -288,8 +288,8 @@ struct hdcp_srm_header {
struct drm_device;
struct drm_connector;
bool drm_hdcp_check_ksvs_revoked(struct drm_device *dev,
u8 *ksvs, u32 ksv_count);
int drm_hdcp_check_ksvs_revoked(struct drm_device *dev,
u8 *ksvs, u32 ksv_count);
int drm_connector_attach_content_protection_property(
struct drm_connector *connector, bool hdcp_content_type);
void drm_hdcp_update_content_protection(struct drm_connector *connector,
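
On the DRM_HDCP_2_KSV_COUNT_2_LSBITS() change above: the two least significant bits of the KSV count sit in bits 7:6 of the byte, so the mask has to be 0xC0; the old 0xC mask combined with the shift by 6 always evaluated to zero. A tiny illustrative helper, not part of the patch:

static inline u8 example_ksv_count_lsbits(u8 byte)
{
	/* e.g. 0x40 -> 1, 0x80 -> 2, 0xC0 -> 3 with the corrected mask */
	return DRM_HDCP_2_KSV_COUNT_2_LSBITS(byte);
}
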


@ -45,10 +45,6 @@ struct drm_dma_handle *drm_pci_alloc(struct drm_device *dev, size_t size,
size_t align);
void drm_pci_free(struct drm_device *dev, struct drm_dma_handle * dmah);
int drm_get_pci_dev(struct pci_dev *pdev,
const struct pci_device_id *ent,
struct drm_driver *driver);
#else
static inline struct drm_dma_handle *drm_pci_alloc(struct drm_device *dev,
@ -62,13 +58,6 @@ static inline void drm_pci_free(struct drm_device *dev,
{
}
static inline int drm_get_pci_dev(struct pci_dev *pdev,
const struct pci_device_id *ent,
struct drm_driver *driver)
{
return -ENOSYS;
}
#endif
#endif /* _DRM_PCI_H_ */


@ -181,4 +181,8 @@ int drm_simple_display_pipe_init(struct drm_device *dev,
const uint64_t *format_modifiers,
struct drm_connector *connector);
int drm_simple_encoder_init(struct drm_device *dev,
struct drm_encoder *encoder,
int encoder_type);
#endif /* __LINUX_DRM_SIMPLE_KMS_HELPER_H */
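
The new drm_simple_encoder_init() backs the "conversions to simple-encoder" listed in this tag: drivers that only ever needed an empty drm_encoder_funcs with .destroy = drm_encoder_cleanup can drop that boilerplate. Hedged call-site sketch (the priv structure, crtc and encoder type are placeholders):

	ret = drm_simple_encoder_init(drm, &priv->encoder, DRM_MODE_ENCODER_TMDS);
	if (ret)
		return ret;
	priv->encoder.possible_crtcs = drm_crtc_mask(&priv->crtc);
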


@ -42,18 +42,6 @@ struct dma_buf_ops {
*/
bool cache_sgt_mapping;
/**
* @dynamic_mapping:
*
* If true the framework makes sure that the map/unmap_dma_buf
* callbacks are always called with the dma_resv object locked.
*
* If false the framework makes sure that the map/unmap_dma_buf
* callbacks are always called without the dma_resv object locked.
* Mutual exclusive with @cache_sgt_mapping.
*/
bool dynamic_mapping;
/**
* @attach:
*
@ -93,14 +81,43 @@ struct dma_buf_ops {
*/
void (*detach)(struct dma_buf *, struct dma_buf_attachment *);
/**
* @pin:
*
* This is called by dma_buf_pin and lets the exporter know that the
* DMA-buf can't be moved any more.
*
* This is called with the dmabuf->resv object locked and is mutually
* exclusive with @cache_sgt_mapping.
*
* This callback is optional and should only be used in limited use
* cases like scanout and not for temporary pin operations.
*
* Returns:
*
* 0 on success, negative error code on failure.
*/
int (*pin)(struct dma_buf_attachment *attach);
/**
* @unpin:
*
* This is called by dma_buf_unpin and lets the exporter know that the
* DMA-buf can be moved again.
*
* This is called with the dmabuf->resv object locked and is mutually
* exclusive with @cache_sgt_mapping.
*
* This callback is optional.
*/
void (*unpin)(struct dma_buf_attachment *attach);
/**
* @map_dma_buf:
*
* This is called by dma_buf_map_attachment() and is used to map a
* shared &dma_buf into device address space, and it is mandatory. It
* can only be called if @attach has been called successfully. This
* essentially pins the DMA buffer into place, and it cannot be moved
* any more
* can only be called if @attach has been called successfully.
*
* This call may sleep, e.g. when the backing storage first needs to be
* allocated, or moved to a location suitable for all currently attached
@ -141,9 +158,8 @@ struct dma_buf_ops {
*
* This is called by dma_buf_unmap_attachment() and should unmap and
* release the &sg_table allocated in @map_dma_buf, and it is mandatory.
* It should also unpin the backing storage if this is the last mapping
* of the DMA buffer, it the exporter supports backing storage
* migration.
* For static dma_buf handling this might also unpin the backing
* storage if this is the last mapping of the DMA buffer.
*/
void (*unmap_dma_buf)(struct dma_buf_attachment *,
struct sg_table *,
@ -311,6 +327,34 @@ struct dma_buf {
} cb_excl, cb_shared;
};
/**
* struct dma_buf_attach_ops - importer operations for an attachment
* @move_notify: [optional] notification that the DMA-buf is moving
*
* Attachment operations implemented by the importer.
*/
struct dma_buf_attach_ops {
/**
* @move_notify:
*
* If this callback is provided, the framework can avoid pinning the
* backing store while mappings exist.
*
* This callback is called with the lock of the reservation object
* associated with the dma_buf held, and the mapping function must be
* called with this lock held as well. This makes sure that no mapping
* is created concurrently with an ongoing move operation.
*
* Mappings stay valid and are not directly affected by this callback.
* But the DMA-buf can now be in a different physical location, so all
* mappings should be destroyed and re-created as soon as possible.
*
* New mappings can be created after this callback returns, and will
* point to the new location of the DMA-buf.
*/
void (*move_notify)(struct dma_buf_attachment *attach);
};
/**
* struct dma_buf_attachment - holds device-buffer attachment data
* @dmabuf: buffer for this attachment.
@ -319,8 +363,9 @@ struct dma_buf {
* @sgt: cached mapping.
* @dir: direction of cached mapping.
* @priv: exporter specific attachment data.
* @dynamic_mapping: true if dma_buf_map/unmap_attachment() is called with the
* dma_resv lock held.
* @importer_ops: importer operations for this attachment; if provided,
* dma_buf_map/unmap_attachment() must be called with the dma_resv lock held.
* @importer_priv: importer specific attachment data.
*
* This structure holds the attachment information between the dma_buf buffer
* and its user device(s). The list contains one attachment struct per device
@ -337,7 +382,8 @@ struct dma_buf_attachment {
struct list_head node;
struct sg_table *sgt;
enum dma_data_direction dir;
bool dynamic_mapping;
const struct dma_buf_attach_ops *importer_ops;
void *importer_priv;
void *priv;
};
@ -399,7 +445,7 @@ static inline void get_dma_buf(struct dma_buf *dmabuf)
*/
static inline bool dma_buf_is_dynamic(struct dma_buf *dmabuf)
{
return dmabuf->ops->dynamic_mapping;
return !!dmabuf->ops->pin;
}
/**
@ -413,16 +459,19 @@ static inline bool dma_buf_is_dynamic(struct dma_buf *dmabuf)
static inline bool
dma_buf_attachment_is_dynamic(struct dma_buf_attachment *attach)
{
return attach->dynamic_mapping;
return !!attach->importer_ops;
}
struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
struct device *dev);
struct dma_buf_attachment *
dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
bool dynamic_mapping);
const struct dma_buf_attach_ops *importer_ops,
void *importer_priv);
void dma_buf_detach(struct dma_buf *dmabuf,
struct dma_buf_attachment *attach);
int dma_buf_pin(struct dma_buf_attachment *attach);
void dma_buf_unpin(struct dma_buf_attachment *attach);
struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info);
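
Putting the new pieces together: an importer that wants to avoid pinning registers a move_notify callback through dma_buf_dynamic_attach(), while exporters that implement @pin/@unpin are the ones treated as dynamic by dma_buf_is_dynamic(). A hedged importer-side sketch; the callback body and the variable names are made up for illustration:

static void example_move_notify(struct dma_buf_attachment *attach)
{
	/* runs with attach->dmabuf->resv held: invalidate cached mappings,
	 * re-map lazily the next time the buffer is actually needed */
}

static const struct dma_buf_attach_ops example_importer_ops = {
	.move_notify = example_move_notify,
};

	attach = dma_buf_dynamic_attach(dmabuf, dev, &example_importer_ops,
					importer_private_data);
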


@ -10,7 +10,7 @@
#include <drm/drm_fourcc.h>
#include <linux/fb.h>
#include <linux/kernel.h>
#include <linux/types.h>
/* format array, use it to initialize a "struct simplefb_format" array */
#define SIMPLEFB_FORMATS \


@ -231,7 +231,7 @@ struct mmp_path {
/* layers */
int overlay_num;
struct mmp_overlay overlays[0];
struct mmp_overlay overlays[];
};
extern struct mmp_path *mmp_get_path(const char *name);
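
The zero-length-array conversions in this pull (mmphw_ctrl, ssd1307fb_array, the MHL burst structs, mmp_path above) all follow the same pattern: a C99 flexible array member plus struct_size() at the allocation site, so the compiler and fortify checks can see the real bounds. Illustrative allocation sketch only, with a hypothetical call site, assuming <linux/overflow.h> and <linux/slab.h>:

	struct mmp_path *path;

	/* one allocation covering the header plus overlay_num trailing elements */
	path = kzalloc(struct_size(path, overlays, overlay_num), GFP_KERNEL);
	if (!path)
		return NULL;
	path->overlay_num = overlay_num;
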