Merge tag 'drm-intel-next-2021-10-04' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

Cross-subsystem Changes:
- fbdev/efifb: Release PCI device's runtime PM ref during FB destroy (Imre)

i915 Core Driver Changes:
- Only access SFC_DONE in media when not fused off, for graphics 12 and newer
- Double memory latency values from pcode for DG2 (Matt Roper)
- ADL-S PCI ID update (Tejas)
- New DG1 PCI ID (Jose)
- Fix regression with uncore refactoring (Dave)

i915 Display Changes:
- ADL-P display (XE_LPD) fixes and updates (Ankit, Jani, Matt Roper, Anusham, Jose, Imre, Vandita)
- DG2 display fixes (Ankit, Jani)
- Expand PCH_CNP tweaked display workaround to all newer displays (Anshuman)
- General display simplifications and clean-ups (Jani, Swati, Jose, Ville)
- PSR clean-ups, drop support for BDW/HSW and enable PSR2 selective fetch by default (Jose, Gwan-gyeong)
- Nuke ORIGIN_GTT (Jose)
- Return proper DPRX link training result (Lee)
- FBC related refactor and fixes (Ville)
- Yet another attempt to solve the fast+narrow vs slow+wide eDP link training (Kai-Heng)
- DP 2.0 preparation work (Jani)
- Silence __iomem sparse warning (Ville)
- Clean up DPLL code (Ville)
- Fix various DP/eDP max rates (Matt Atwood, Animesh, Jani)
- Remove VBT ddi_port_info caching (Jani)
- DSI driver improvements (Lee)
- HDCP fixes (Juston)
- Associate ACPI connector nodes with connector entries (Heikki)
- Add support for out-of-band hotplug events (Hans)
- VESA vendor block and drm/i915 MSO use of it (Jani)
- Fixes for bigjoiner (Ville)
- Update memory bandwidth parameters (RK)
- DMC related fixes (Chris, Jose)
- HDR related fixes and improvements (Tejas)
- g4x/vlv/chv CxSR/wm fixes/cleanups (Ville)
- Use BIOS provided value for RKL audio's HDA link (Kai-Heng)
- Fix the DSC check while selecting min_cdclk (Vandita)
- Split and constify vtable (Dave)
- Add ww context to intel_dpt_pin (Maarten)
- Fix bdb version check (Lukasz)
- DP per-lane drive settings prep work and other DP fixes (Ville)

Signed-off-by: Dave Airlie <airlied@redhat.com>

# gpg: Signature made Tue 05 Oct 2021 04:58:16 AEST
# gpg:                using RSA key 6D207068EEDD65091C2CE2A3FA625F640EEB13CA
# gpg: Good signature from "Rodrigo Vivi <rodrigo.vivi@intel.com>" [unknown]
# gpg:                 aka "Rodrigo Vivi <rodrigo.vivi@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6D20 7068 EEDD 6509 1C2C E2A3 FA62 5F64 0EEB 13CA

From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/YVtPk6llsxBFiw7W@intel.com
commit c7c774fe09
@@ -183,26 +183,23 @@ Frame Buffer Compression (FBC)
Display Refresh Rate Switching (DRRS)
-------------------------------------

-.. kernel-doc:: drivers/gpu/drm/i915/display/intel_dp.c
+.. kernel-doc:: drivers/gpu/drm/i915/display/intel_drrs.c
   :doc: Display Refresh Rate Switching (DRRS)

-.. kernel-doc:: drivers/gpu/drm/i915/display/intel_dp.c
-   :functions: intel_dp_set_drrs_state
+.. kernel-doc:: drivers/gpu/drm/i915/display/intel_drrs.c
+   :functions: intel_drrs_enable

-.. kernel-doc:: drivers/gpu/drm/i915/display/intel_dp.c
-   :functions: intel_edp_drrs_enable
+.. kernel-doc:: drivers/gpu/drm/i915/display/intel_drrs.c
+   :functions: intel_drrs_disable

-.. kernel-doc:: drivers/gpu/drm/i915/display/intel_dp.c
-   :functions: intel_edp_drrs_disable
+.. kernel-doc:: drivers/gpu/drm/i915/display/intel_drrs.c
+   :functions: intel_drrs_invalidate

-.. kernel-doc:: drivers/gpu/drm/i915/display/intel_dp.c
-   :functions: intel_edp_drrs_invalidate
+.. kernel-doc:: drivers/gpu/drm/i915/display/intel_drrs.c
+   :functions: intel_drrs_flush

-.. kernel-doc:: drivers/gpu/drm/i915/display/intel_dp.c
-   :functions: intel_edp_drrs_flush

-.. kernel-doc:: drivers/gpu/drm/i915/display/intel_dp.c
-   :functions: intel_dp_drrs_init
+.. kernel-doc:: drivers/gpu/drm/i915/display/intel_drrs.c
+   :functions: intel_drrs_init

DPIO
----
@@ -130,6 +130,20 @@ u8 drm_dp_get_adjust_request_pre_emphasis(const u8 link_status[DP_LINK_STATUS_SIZE],
}
EXPORT_SYMBOL(drm_dp_get_adjust_request_pre_emphasis);

+/* DP 2.0 128b/132b */
+u8 drm_dp_get_adjust_tx_ffe_preset(const u8 link_status[DP_LINK_STATUS_SIZE],
+				   int lane)
+{
+	int i = DP_ADJUST_REQUEST_LANE0_1 + (lane >> 1);
+	int s = ((lane & 1) ?
+		 DP_ADJUST_TX_FFE_PRESET_LANE1_SHIFT :
+		 DP_ADJUST_TX_FFE_PRESET_LANE0_SHIFT);
+	u8 l = dp_link_status(link_status, i);
+
+	return (l >> s) & 0xf;
+}
+EXPORT_SYMBOL(drm_dp_get_adjust_tx_ffe_preset);
+
u8 drm_dp_get_adjust_request_post_cursor(const u8 link_status[DP_LINK_STATUS_SIZE],
					 unsigned int lane)
{
@@ -207,15 +221,33 @@ EXPORT_SYMBOL(drm_dp_lttpr_link_train_channel_eq_delay);

u8 drm_dp_link_rate_to_bw_code(int link_rate)
{
-	/* Spec says link_bw = link_rate / 0.27Gbps */
-	return link_rate / 27000;
+	switch (link_rate) {
+	case 1000000:
+		return DP_LINK_BW_10;
+	case 1350000:
+		return DP_LINK_BW_13_5;
+	case 2000000:
+		return DP_LINK_BW_20;
+	default:
+		/* Spec says link_bw = link_rate / 0.27Gbps */
+		return link_rate / 27000;
+	}
}
EXPORT_SYMBOL(drm_dp_link_rate_to_bw_code);

int drm_dp_bw_code_to_link_rate(u8 link_bw)
{
-	/* Spec says link_rate = link_bw * 0.27Gbps */
-	return link_bw * 27000;
+	switch (link_bw) {
+	case DP_LINK_BW_10:
+		return 1000000;
+	case DP_LINK_BW_13_5:
+		return 1350000;
+	case DP_LINK_BW_20:
+		return 2000000;
+	default:
+		/* Spec says link_rate = link_bw * 0.27Gbps */
+		return link_bw * 27000;
+	}
}
EXPORT_SYMBOL(drm_dp_bw_code_to_link_rate);

@@ -590,7 +622,7 @@ static u8 drm_dp_downstream_port_count(const u8 dpcd[DP_RECEIVER_CAP_SIZE])
static int drm_dp_read_extended_dpcd_caps(struct drm_dp_aux *aux,
					  u8 dpcd[DP_RECEIVER_CAP_SIZE])
{
-	u8 dpcd_ext[6];
+	u8 dpcd_ext[DP_RECEIVER_CAP_SIZE];
	int ret;

	/*
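Note on the two hunks above: the 128b/132b UHBR rates cannot be expressed through the legacy "link_bw = link_rate / 0.27 Gbps" arithmetic, hence the explicit cases. A minimal sketch of the resulting mapping (illustrative only, not part of the diff; it uses just the helpers and defines shown here, and the example_* name is made up):

#include <drm/drm_dp_helper.h>

/* Round-trip check of the legacy 8b/10b encoding and the new UHBR codes. */
static bool example_check_dp20_bw_codes(void)
{
	/* 2.7 Gbps (HBR) still uses the linear link_rate / 27000 encoding. */
	if (drm_dp_link_rate_to_bw_code(270000) != DP_LINK_BW_2_7)
		return false;

	/* The UHBR10/13.5/20 rates map to dedicated codes instead. */
	return drm_dp_link_rate_to_bw_code(1000000) == DP_LINK_BW_10 &&
	       drm_dp_bw_code_to_link_rate(DP_LINK_BW_13_5) == 1350000 &&
	       drm_dp_bw_code_to_link_rate(DP_LINK_BW_20) == 2000000;
}

The new drm_dp_get_adjust_tx_ffe_preset() mirrors the existing voltage-swing/pre-emphasis getters: it extracts the 4-bit TX FFE preset requested for one lane from the shared ADJUST_REQUEST bytes.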
@@ -28,6 +28,7 @@
 * DEALINGS IN THE SOFTWARE.
 */

+#include <linux/bitfield.h>
#include <linux/hdmi.h>
#include <linux/i2c.h>
#include <linux/kernel.h>
@@ -49,6 +50,11 @@
	(((edid)->version > (maj)) || \
	 ((edid)->version == (maj) && (edid)->revision > (min)))

+static int oui(u8 first, u8 second, u8 third)
+{
+	return (first << 16) | (second << 8) | third;
+}
+
#define EDID_EST_TIMINGS 16
#define EDID_STD_TIMINGS 8
#define EDID_DETAILED_TIMINGS 4
@@ -4187,32 +4193,24 @@ cea_db_offsets(const u8 *cea, int *start, int *end)

static bool cea_db_is_hdmi_vsdb(const u8 *db)
{
-	int hdmi_id;
-
	if (cea_db_tag(db) != VENDOR_BLOCK)
		return false;

	if (cea_db_payload_len(db) < 5)
		return false;

-	hdmi_id = db[1] | (db[2] << 8) | (db[3] << 16);
-
-	return hdmi_id == HDMI_IEEE_OUI;
+	return oui(db[3], db[2], db[1]) == HDMI_IEEE_OUI;
}

static bool cea_db_is_hdmi_forum_vsdb(const u8 *db)
{
-	unsigned int oui;
-
	if (cea_db_tag(db) != VENDOR_BLOCK)
		return false;

	if (cea_db_payload_len(db) < 7)
		return false;

-	oui = db[3] << 16 | db[2] << 8 | db[1];
-
-	return oui == HDMI_FORUM_IEEE_OUI;
+	return oui(db[3], db[2], db[1]) == HDMI_FORUM_IEEE_OUI;
}

static bool cea_db_is_vcdb(const u8 *db)
@@ -5222,6 +5220,71 @@ void drm_get_monitor_range(struct drm_connector *connector,
		    info->monitor_range.max_vfreq);
}

+static void drm_parse_vesa_mso_data(struct drm_connector *connector,
+				    const struct displayid_block *block)
+{
+	struct displayid_vesa_vendor_specific_block *vesa =
+		(struct displayid_vesa_vendor_specific_block *)block;
+	struct drm_display_info *info = &connector->display_info;
+
+	if (block->num_bytes < 3) {
+		drm_dbg_kms(connector->dev, "Unexpected vendor block size %u\n",
+			    block->num_bytes);
+		return;
+	}
+
+	if (oui(vesa->oui[0], vesa->oui[1], vesa->oui[2]) != VESA_IEEE_OUI)
+		return;
+
+	if (sizeof(*vesa) != sizeof(*block) + block->num_bytes) {
+		drm_dbg_kms(connector->dev, "Unexpected VESA vendor block size\n");
+		return;
+	}
+
+	switch (FIELD_GET(DISPLAYID_VESA_MSO_MODE, vesa->mso)) {
+	default:
+		drm_dbg_kms(connector->dev, "Reserved MSO mode value\n");
+		fallthrough;
+	case 0:
+		info->mso_stream_count = 0;
+		break;
+	case 1:
+		info->mso_stream_count = 2; /* 2 or 4 links */
+		break;
+	case 2:
+		info->mso_stream_count = 4; /* 4 links */
+		break;
+	}
+
+	if (!info->mso_stream_count) {
+		info->mso_pixel_overlap = 0;
+		return;
+	}
+
+	info->mso_pixel_overlap = FIELD_GET(DISPLAYID_VESA_MSO_OVERLAP, vesa->mso);
+	if (info->mso_pixel_overlap > 8) {
+		drm_dbg_kms(connector->dev, "Reserved MSO pixel overlap value %u\n",
+			    info->mso_pixel_overlap);
+		info->mso_pixel_overlap = 8;
+	}
+
+	drm_dbg_kms(connector->dev, "MSO stream count %u, pixel overlap %u\n",
+		    info->mso_stream_count, info->mso_pixel_overlap);
+}
+
+static void drm_update_mso(struct drm_connector *connector, const struct edid *edid)
+{
+	const struct displayid_block *block;
+	struct displayid_iter iter;
+
+	displayid_iter_edid_begin(edid, &iter);
+	displayid_iter_for_each(block, &iter) {
+		if (block->tag == DATA_BLOCK_2_VENDOR_SPECIFIC)
+			drm_parse_vesa_mso_data(connector, block);
+	}
+	displayid_iter_end(&iter);
+}
+
/* A connector has no EDID information, so we've got no EDID to compute quirks from. Reset
 * all of the values which would have been set from EDID
 */
@@ -5245,6 +5308,9 @@ drm_reset_display_info(struct drm_connector *connector)

	info->non_desktop = 0;
	memset(&info->monitor_range, 0, sizeof(info->monitor_range));
+
+	info->mso_stream_count = 0;
+	info->mso_pixel_overlap = 0;
}

u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edid)
@@ -5323,6 +5389,9 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edid)
		info->color_formats |= DRM_COLOR_FORMAT_YCRCB444;
	if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB422)
		info->color_formats |= DRM_COLOR_FORMAT_YCRCB422;

+	drm_update_mso(connector, edid);
+
	return quirks;
}
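Why the call sites above pass the OUI bytes in reverse order: CTA-861 vendor-specific data blocks store the 24-bit IEEE OUI least-significant byte first, while the new oui() helper composes its arguments most-significant byte first. A small illustrative sketch (the example_* name is not in the kernel):

#include <linux/hdmi.h>
#include <linux/types.h>

/* Same helper as introduced above: bytes are passed MSB first. */
static int oui(u8 first, u8 second, u8 third)
{
	return (first << 16) | (second << 8) | third;
}

/*
 * db[0] is the tag/length byte of a CTA data block; db[1..3] carry the
 * OUI in little-endian order, so the HDMI OUI 0x000C03 appears in the
 * EDID as the byte sequence 0x03 0x0C 0x00.
 */
static bool example_block_is_hdmi(const u8 *db)
{
	return oui(db[3], db[2], db[1]) == HDMI_IEEE_OUI;
}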
||||
|
@@ -211,6 +211,8 @@ i915-y += \
	display/intel_dpio_phy.o \
	display/intel_dpll.o \
	display/intel_dpll_mgr.o \
+	display/intel_dpt.o \
+	display/intel_drrs.o \
	display/intel_dsb.o \
	display/intel_fb.o \
	display/intel_fbc.o \
@@ -247,6 +249,7 @@ i915-y += \
	display/g4x_dp.o \
	display/g4x_hdmi.o \
	display/icl_dsi.o \
+	display/intel_backlight.o \
	display/intel_crt.o \
	display/intel_ddi.o \
	display/intel_ddi_buf_trans.o \
@ -7,6 +7,7 @@
|
||||
|
||||
#include "g4x_dp.h"
|
||||
#include "intel_audio.h"
|
||||
#include "intel_backlight.h"
|
||||
#include "intel_connector.h"
|
||||
#include "intel_de.h"
|
||||
#include "intel_display_types.h"
|
||||
@ -16,7 +17,6 @@
|
||||
#include "intel_fifo_underrun.h"
|
||||
#include "intel_hdmi.h"
|
||||
#include "intel_hotplug.h"
|
||||
#include "intel_panel.h"
|
||||
#include "intel_pps.h"
|
||||
#include "intel_sideband.h"
|
||||
|
||||
@ -211,7 +211,7 @@ static void ilk_edp_pll_on(struct intel_dp *intel_dp,
|
||||
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
|
||||
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
|
||||
|
||||
assert_pipe_disabled(dev_priv, pipe_config->cpu_transcoder);
|
||||
assert_transcoder_disabled(dev_priv, pipe_config->cpu_transcoder);
|
||||
assert_dp_port_disabled(intel_dp);
|
||||
assert_edp_pll_disabled(dev_priv);
|
||||
|
||||
@ -251,7 +251,7 @@ static void ilk_edp_pll_off(struct intel_dp *intel_dp,
|
||||
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
|
||||
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
|
||||
|
||||
assert_pipe_disabled(dev_priv, old_crtc_state->cpu_transcoder);
|
||||
assert_transcoder_disabled(dev_priv, old_crtc_state->cpu_transcoder);
|
||||
assert_dp_port_disabled(intel_dp);
|
||||
assert_edp_pll_enabled(dev_priv);
|
||||
|
||||
@ -426,7 +426,6 @@ intel_dp_link_down(struct intel_encoder *encoder,
|
||||
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
|
||||
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
|
||||
enum port port = encoder->port;
|
||||
u32 DP = intel_dp->DP;
|
||||
|
||||
if (drm_WARN_ON(&dev_priv->drm,
|
||||
(intel_de_read(dev_priv, intel_dp->output_reg) &
|
||||
@ -437,17 +436,17 @@ intel_dp_link_down(struct intel_encoder *encoder,
|
||||
|
||||
if ((IS_IVYBRIDGE(dev_priv) && port == PORT_A) ||
|
||||
(HAS_PCH_CPT(dev_priv) && port != PORT_A)) {
|
||||
DP &= ~DP_LINK_TRAIN_MASK_CPT;
|
||||
DP |= DP_LINK_TRAIN_PAT_IDLE_CPT;
|
||||
intel_dp->DP &= ~DP_LINK_TRAIN_MASK_CPT;
|
||||
intel_dp->DP |= DP_LINK_TRAIN_PAT_IDLE_CPT;
|
||||
} else {
|
||||
DP &= ~DP_LINK_TRAIN_MASK;
|
||||
DP |= DP_LINK_TRAIN_PAT_IDLE;
|
||||
intel_dp->DP &= ~DP_LINK_TRAIN_MASK;
|
||||
intel_dp->DP |= DP_LINK_TRAIN_PAT_IDLE;
|
||||
}
|
||||
intel_de_write(dev_priv, intel_dp->output_reg, DP);
|
||||
intel_de_write(dev_priv, intel_dp->output_reg, intel_dp->DP);
|
||||
intel_de_posting_read(dev_priv, intel_dp->output_reg);
|
||||
|
||||
DP &= ~(DP_PORT_EN | DP_AUDIO_OUTPUT_ENABLE);
|
||||
intel_de_write(dev_priv, intel_dp->output_reg, DP);
|
||||
intel_dp->DP &= ~(DP_PORT_EN | DP_AUDIO_OUTPUT_ENABLE);
|
||||
intel_de_write(dev_priv, intel_dp->output_reg, intel_dp->DP);
|
||||
intel_de_posting_read(dev_priv, intel_dp->output_reg);
|
||||
|
||||
/*
|
||||
@ -464,14 +463,14 @@ intel_dp_link_down(struct intel_encoder *encoder,
|
||||
intel_set_pch_fifo_underrun_reporting(dev_priv, PIPE_A, false);
|
||||
|
||||
/* always enable with pattern 1 (as per spec) */
|
||||
DP &= ~(DP_PIPE_SEL_MASK | DP_LINK_TRAIN_MASK);
|
||||
DP |= DP_PORT_EN | DP_PIPE_SEL(PIPE_A) |
|
||||
intel_dp->DP &= ~(DP_PIPE_SEL_MASK | DP_LINK_TRAIN_MASK);
|
||||
intel_dp->DP |= DP_PORT_EN | DP_PIPE_SEL(PIPE_A) |
|
||||
DP_LINK_TRAIN_PAT_1;
|
||||
intel_de_write(dev_priv, intel_dp->output_reg, DP);
|
||||
intel_de_write(dev_priv, intel_dp->output_reg, intel_dp->DP);
|
||||
intel_de_posting_read(dev_priv, intel_dp->output_reg);
|
||||
|
||||
DP &= ~DP_PORT_EN;
|
||||
intel_de_write(dev_priv, intel_dp->output_reg, DP);
|
||||
intel_dp->DP &= ~DP_PORT_EN;
|
||||
intel_de_write(dev_priv, intel_dp->output_reg, intel_dp->DP);
|
||||
intel_de_posting_read(dev_priv, intel_dp->output_reg);
|
||||
|
||||
intel_wait_for_vblank_if_active(dev_priv, PIPE_A);
|
||||
@ -481,8 +480,6 @@ intel_dp_link_down(struct intel_encoder *encoder,
|
||||
|
||||
msleep(intel_dp->pps.panel_power_down_delay);
|
||||
|
||||
intel_dp->DP = DP;
|
||||
|
||||
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
|
||||
intel_wakeref_t wakeref;
|
||||
|
||||
@ -582,19 +579,18 @@ cpt_set_link_train(struct intel_dp *intel_dp,
|
||||
u8 dp_train_pat)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
u32 *DP = &intel_dp->DP;
|
||||
|
||||
*DP &= ~DP_LINK_TRAIN_MASK_CPT;
|
||||
intel_dp->DP &= ~DP_LINK_TRAIN_MASK_CPT;
|
||||
|
||||
switch (intel_dp_training_pattern_symbol(dp_train_pat)) {
|
||||
case DP_TRAINING_PATTERN_DISABLE:
|
||||
*DP |= DP_LINK_TRAIN_OFF_CPT;
|
||||
intel_dp->DP |= DP_LINK_TRAIN_OFF_CPT;
|
||||
break;
|
||||
case DP_TRAINING_PATTERN_1:
|
||||
*DP |= DP_LINK_TRAIN_PAT_1_CPT;
|
||||
intel_dp->DP |= DP_LINK_TRAIN_PAT_1_CPT;
|
||||
break;
|
||||
case DP_TRAINING_PATTERN_2:
|
||||
*DP |= DP_LINK_TRAIN_PAT_2_CPT;
|
||||
intel_dp->DP |= DP_LINK_TRAIN_PAT_2_CPT;
|
||||
break;
|
||||
default:
|
||||
MISSING_CASE(intel_dp_training_pattern_symbol(dp_train_pat));
|
||||
@ -611,19 +607,18 @@ g4x_set_link_train(struct intel_dp *intel_dp,
|
||||
u8 dp_train_pat)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
u32 *DP = &intel_dp->DP;
|
||||
|
||||
*DP &= ~DP_LINK_TRAIN_MASK;
|
||||
intel_dp->DP &= ~DP_LINK_TRAIN_MASK;
|
||||
|
||||
switch (intel_dp_training_pattern_symbol(dp_train_pat)) {
|
||||
case DP_TRAINING_PATTERN_DISABLE:
|
||||
*DP |= DP_LINK_TRAIN_OFF;
|
||||
intel_dp->DP |= DP_LINK_TRAIN_OFF;
|
||||
break;
|
||||
case DP_TRAINING_PATTERN_1:
|
||||
*DP |= DP_LINK_TRAIN_PAT_1;
|
||||
intel_dp->DP |= DP_LINK_TRAIN_PAT_1;
|
||||
break;
|
||||
case DP_TRAINING_PATTERN_2:
|
||||
*DP |= DP_LINK_TRAIN_PAT_2;
|
||||
intel_dp->DP |= DP_LINK_TRAIN_PAT_2;
|
||||
break;
|
||||
default:
|
||||
MISSING_CASE(intel_dp_training_pattern_symbol(dp_train_pat));
|
||||
@ -813,10 +808,10 @@ static u8 intel_dp_preemph_max_3(struct intel_dp *intel_dp)
|
||||
return DP_TRAIN_PRE_EMPH_LEVEL_3;
|
||||
}
|
||||
|
||||
static void vlv_set_signal_levels(struct intel_dp *intel_dp,
|
||||
static void vlv_set_signal_levels(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
|
||||
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
|
||||
unsigned long demph_reg_value, preemph_reg_value,
|
||||
uniqtranscale_reg_value;
|
||||
u8 train_set = intel_dp->train_set[0];
|
||||
@ -899,10 +894,10 @@ static void vlv_set_signal_levels(struct intel_dp *intel_dp,
|
||||
uniqtranscale_reg_value, 0);
|
||||
}
|
||||
|
||||
static void chv_set_signal_levels(struct intel_dp *intel_dp,
|
||||
static void chv_set_signal_levels(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
|
||||
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
|
||||
u32 deemph_reg_value, margin_reg_value;
|
||||
bool uniq_trans_scale = false;
|
||||
u8 train_set = intel_dp->train_set[0];
|
||||
@ -1020,10 +1015,11 @@ static u32 g4x_signal_levels(u8 train_set)
|
||||
}
|
||||
|
||||
static void
|
||||
g4x_set_signal_levels(struct intel_dp *intel_dp,
|
||||
g4x_set_signal_levels(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
|
||||
u8 train_set = intel_dp->train_set[0];
|
||||
u32 signal_levels;
|
||||
|
||||
@ -1067,10 +1063,11 @@ static u32 snb_cpu_edp_signal_levels(u8 train_set)
|
||||
}
|
||||
|
||||
static void
|
||||
snb_cpu_edp_set_signal_levels(struct intel_dp *intel_dp,
|
||||
snb_cpu_edp_set_signal_levels(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
|
||||
u8 train_set = intel_dp->train_set[0];
|
||||
u32 signal_levels;
|
||||
|
||||
@ -1118,10 +1115,11 @@ static u32 ivb_cpu_edp_signal_levels(u8 train_set)
|
||||
}
|
||||
|
||||
static void
|
||||
ivb_cpu_edp_set_signal_levels(struct intel_dp *intel_dp,
|
||||
ivb_cpu_edp_set_signal_levels(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
|
||||
u8 train_set = intel_dp->train_set[0];
|
||||
u32 signal_levels;
|
||||
|
||||
@ -1334,7 +1332,7 @@ bool g4x_dp_init(struct drm_i915_private *dev_priv,
|
||||
intel_encoder->get_config = intel_dp_get_config;
|
||||
intel_encoder->sync_state = intel_dp_sync_state;
|
||||
intel_encoder->initial_fastset_check = intel_dp_initial_fastset_check;
|
||||
intel_encoder->update_pipe = intel_panel_update_backlight;
|
||||
intel_encoder->update_pipe = intel_backlight_update;
|
||||
intel_encoder->suspend = intel_dp_encoder_suspend;
|
||||
intel_encoder->shutdown = intel_dp_encoder_shutdown;
|
||||
if (IS_CHERRYVIEW(dev_priv)) {
|
||||
@ -1364,15 +1362,15 @@ bool g4x_dp_init(struct drm_i915_private *dev_priv,
|
||||
dig_port->dp.set_link_train = g4x_set_link_train;
|
||||
|
||||
if (IS_CHERRYVIEW(dev_priv))
|
||||
dig_port->dp.set_signal_levels = chv_set_signal_levels;
|
||||
intel_encoder->set_signal_levels = chv_set_signal_levels;
|
||||
else if (IS_VALLEYVIEW(dev_priv))
|
||||
dig_port->dp.set_signal_levels = vlv_set_signal_levels;
|
||||
intel_encoder->set_signal_levels = vlv_set_signal_levels;
|
||||
else if (IS_IVYBRIDGE(dev_priv) && port == PORT_A)
|
||||
dig_port->dp.set_signal_levels = ivb_cpu_edp_set_signal_levels;
|
||||
intel_encoder->set_signal_levels = ivb_cpu_edp_set_signal_levels;
|
||||
else if (IS_SANDYBRIDGE(dev_priv) && port == PORT_A)
|
||||
dig_port->dp.set_signal_levels = snb_cpu_edp_set_signal_levels;
|
||||
intel_encoder->set_signal_levels = snb_cpu_edp_set_signal_levels;
|
||||
else
|
||||
dig_port->dp.set_signal_levels = g4x_set_signal_levels;
|
||||
intel_encoder->set_signal_levels = g4x_set_signal_levels;
|
||||
|
||||
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv) ||
|
||||
(HAS_PCH_SPLIT(dev_priv) && port != PORT_A)) {
|
||||
|
@ -29,6 +29,7 @@
|
||||
#include <drm/drm_mipi_dsi.h>
|
||||
|
||||
#include "intel_atomic.h"
|
||||
#include "intel_backlight.h"
|
||||
#include "intel_combo_phy.h"
|
||||
#include "intel_connector.h"
|
||||
#include "intel_crtc.h"
|
||||
@ -54,20 +55,28 @@ static int payload_credits_available(struct drm_i915_private *dev_priv,
|
||||
>> FREE_PLOAD_CREDIT_SHIFT;
|
||||
}
|
||||
|
||||
static void wait_for_header_credits(struct drm_i915_private *dev_priv,
|
||||
enum transcoder dsi_trans)
|
||||
static bool wait_for_header_credits(struct drm_i915_private *dev_priv,
|
||||
enum transcoder dsi_trans, int hdr_credit)
|
||||
{
|
||||
if (wait_for_us(header_credits_available(dev_priv, dsi_trans) >=
|
||||
MAX_HEADER_CREDIT, 100))
|
||||
hdr_credit, 100)) {
|
||||
drm_err(&dev_priv->drm, "DSI header credits not released\n");
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
static void wait_for_payload_credits(struct drm_i915_private *dev_priv,
|
||||
enum transcoder dsi_trans)
|
||||
static bool wait_for_payload_credits(struct drm_i915_private *dev_priv,
|
||||
enum transcoder dsi_trans, int payld_credit)
|
||||
{
|
||||
if (wait_for_us(payload_credits_available(dev_priv, dsi_trans) >=
|
||||
MAX_PLOAD_CREDIT, 100))
|
||||
payld_credit, 100)) {
|
||||
drm_err(&dev_priv->drm, "DSI payload credits not released\n");
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
static enum transcoder dsi_port_to_transcoder(enum port port)
|
||||
@ -90,8 +99,8 @@ static void wait_for_cmds_dispatched_to_panel(struct intel_encoder *encoder)
|
||||
/* wait for header/payload credits to be released */
|
||||
for_each_dsi_port(port, intel_dsi->ports) {
|
||||
dsi_trans = dsi_port_to_transcoder(port);
|
||||
wait_for_header_credits(dev_priv, dsi_trans);
|
||||
wait_for_payload_credits(dev_priv, dsi_trans);
|
||||
wait_for_header_credits(dev_priv, dsi_trans, MAX_HEADER_CREDIT);
|
||||
wait_for_payload_credits(dev_priv, dsi_trans, MAX_PLOAD_CREDIT);
|
||||
}
|
||||
|
||||
/* send nop DCS command */
|
||||
@ -108,7 +117,7 @@ static void wait_for_cmds_dispatched_to_panel(struct intel_encoder *encoder)
|
||||
/* wait for header credits to be released */
|
||||
for_each_dsi_port(port, intel_dsi->ports) {
|
||||
dsi_trans = dsi_port_to_transcoder(port);
|
||||
wait_for_header_credits(dev_priv, dsi_trans);
|
||||
wait_for_header_credits(dev_priv, dsi_trans, MAX_HEADER_CREDIT);
|
||||
}
|
||||
|
||||
/* wait for LP TX in progress bit to be cleared */
|
||||
@ -120,54 +129,52 @@ static void wait_for_cmds_dispatched_to_panel(struct intel_encoder *encoder)
|
||||
}
|
||||
}
|
||||
|
||||
static bool add_payld_to_queue(struct intel_dsi_host *host, const u8 *data,
|
||||
u32 len)
|
||||
static int dsi_send_pkt_payld(struct intel_dsi_host *host,
|
||||
const struct mipi_dsi_packet *packet)
|
||||
{
|
||||
struct intel_dsi *intel_dsi = host->intel_dsi;
|
||||
struct drm_i915_private *dev_priv = to_i915(intel_dsi->base.base.dev);
|
||||
struct drm_i915_private *i915 = to_i915(intel_dsi->base.base.dev);
|
||||
enum transcoder dsi_trans = dsi_port_to_transcoder(host->port);
|
||||
int free_credits;
|
||||
const u8 *data = packet->payload;
|
||||
u32 len = packet->payload_length;
|
||||
int i, j;
|
||||
|
||||
/* payload queue can accept *256 bytes*, check limit */
|
||||
if (len > MAX_PLOAD_CREDIT * 4) {
|
||||
drm_err(&i915->drm, "payload size exceeds max queue limit\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
for (i = 0; i < len; i += 4) {
|
||||
u32 tmp = 0;
|
||||
|
||||
free_credits = payload_credits_available(dev_priv, dsi_trans);
|
||||
if (free_credits < 1) {
|
||||
drm_err(&dev_priv->drm,
|
||||
"Payload credit not available\n");
|
||||
return false;
|
||||
}
|
||||
if (!wait_for_payload_credits(i915, dsi_trans, 1))
|
||||
return -EBUSY;
|
||||
|
||||
for (j = 0; j < min_t(u32, len - i, 4); j++)
|
||||
tmp |= *data++ << 8 * j;
|
||||
|
||||
intel_de_write(dev_priv, DSI_CMD_TXPYLD(dsi_trans), tmp);
|
||||
intel_de_write(i915, DSI_CMD_TXPYLD(dsi_trans), tmp);
|
||||
}
|
||||
|
||||
return true;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int dsi_send_pkt_hdr(struct intel_dsi_host *host,
|
||||
struct mipi_dsi_packet pkt, bool enable_lpdt)
|
||||
const struct mipi_dsi_packet *packet,
|
||||
bool enable_lpdt)
|
||||
{
|
||||
struct intel_dsi *intel_dsi = host->intel_dsi;
|
||||
struct drm_i915_private *dev_priv = to_i915(intel_dsi->base.base.dev);
|
||||
enum transcoder dsi_trans = dsi_port_to_transcoder(host->port);
|
||||
u32 tmp;
|
||||
int free_credits;
|
||||
|
||||
/* check if header credit available */
|
||||
free_credits = header_credits_available(dev_priv, dsi_trans);
|
||||
if (free_credits < 1) {
|
||||
drm_err(&dev_priv->drm,
|
||||
"send pkt header failed, not enough hdr credits\n");
|
||||
return -1;
|
||||
}
|
||||
if (!wait_for_header_credits(dev_priv, dsi_trans, 1))
|
||||
return -EBUSY;
|
||||
|
||||
tmp = intel_de_read(dev_priv, DSI_CMD_TXHDR(dsi_trans));
|
||||
|
||||
if (pkt.payload)
|
||||
if (packet->payload)
|
||||
tmp |= PAYLOAD_PRESENT;
|
||||
else
|
||||
tmp &= ~PAYLOAD_PRESENT;
|
||||
@ -178,37 +185,15 @@ static int dsi_send_pkt_hdr(struct intel_dsi_host *host,
|
||||
tmp |= LP_DATA_TRANSFER;
|
||||
|
||||
tmp &= ~(PARAM_WC_MASK | VC_MASK | DT_MASK);
|
||||
tmp |= ((pkt.header[0] & VC_MASK) << VC_SHIFT);
|
||||
tmp |= ((pkt.header[0] & DT_MASK) << DT_SHIFT);
|
||||
tmp |= (pkt.header[1] << PARAM_WC_LOWER_SHIFT);
|
||||
tmp |= (pkt.header[2] << PARAM_WC_UPPER_SHIFT);
|
||||
tmp |= ((packet->header[0] & VC_MASK) << VC_SHIFT);
|
||||
tmp |= ((packet->header[0] & DT_MASK) << DT_SHIFT);
|
||||
tmp |= (packet->header[1] << PARAM_WC_LOWER_SHIFT);
|
||||
tmp |= (packet->header[2] << PARAM_WC_UPPER_SHIFT);
|
||||
intel_de_write(dev_priv, DSI_CMD_TXHDR(dsi_trans), tmp);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int dsi_send_pkt_payld(struct intel_dsi_host *host,
|
||||
struct mipi_dsi_packet pkt)
|
||||
{
|
||||
struct intel_dsi *intel_dsi = host->intel_dsi;
|
||||
struct drm_i915_private *i915 = to_i915(intel_dsi->base.base.dev);
|
||||
|
||||
/* payload queue can accept *256 bytes*, check limit */
|
||||
if (pkt.payload_length > MAX_PLOAD_CREDIT * 4) {
|
||||
drm_err(&i915->drm, "payload size exceeds max queue limit\n");
|
||||
return -1;
|
||||
}
|
||||
|
||||
/* load data into command payload queue */
|
||||
if (!add_payld_to_queue(host, pkt.payload,
|
||||
pkt.payload_length)) {
|
||||
drm_err(&i915->drm, "adding payload to queue failed\n");
|
||||
return -1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void icl_dsi_frame_update(struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
|
||||
@ -1270,6 +1255,26 @@ static void icl_apply_kvmr_pipe_a_wa(struct intel_encoder *encoder,
|
||||
IGNORE_KVMR_PIPE_A,
|
||||
enable ? IGNORE_KVMR_PIPE_A : 0);
|
||||
}
|
||||
|
||||
/*
|
||||
* Wa_16012360555:adl-p
|
||||
* SW will have to program the "LP to HS Wakeup Guardband"
|
||||
* to account for the repeaters on the HS Request/Ready
|
||||
* PPI signaling between the Display engine and the DPHY.
|
||||
*/
|
||||
static void adlp_set_lp_hs_wakeup_gb(struct intel_encoder *encoder)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
|
||||
struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
|
||||
enum port port;
|
||||
|
||||
if (DISPLAY_VER(i915) == 13) {
|
||||
for_each_dsi_port(port, intel_dsi->ports)
|
||||
intel_de_rmw(i915, TGL_DSI_CHKN_REG(port),
|
||||
TGL_DSI_CHKN_LSHS_GB, 0x4);
|
||||
}
|
||||
}
|
||||
|
||||
static void gen11_dsi_enable(struct intel_atomic_state *state,
|
||||
struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
@ -1283,11 +1288,14 @@ static void gen11_dsi_enable(struct intel_atomic_state *state,
|
||||
/* Wa_1409054076:icl,jsl,ehl */
|
||||
icl_apply_kvmr_pipe_a_wa(encoder, crtc->pipe, true);
|
||||
|
||||
/* Wa_16012360555:adl-p */
|
||||
adlp_set_lp_hs_wakeup_gb(encoder);
|
||||
|
||||
/* step6d: enable dsi transcoder */
|
||||
gen11_dsi_enable_transcoder(encoder);
|
||||
|
||||
/* step7: enable backlight */
|
||||
intel_panel_enable_backlight(crtc_state, conn_state);
|
||||
intel_backlight_enable(crtc_state, conn_state);
|
||||
intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_BACKLIGHT_ON);
|
||||
|
||||
intel_crtc_vblank_on(crtc_state);
|
||||
@ -1440,7 +1448,7 @@ static void gen11_dsi_disable(struct intel_atomic_state *state,
|
||||
|
||||
/* step1: turn off backlight */
|
||||
intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_BACKLIGHT_OFF);
|
||||
intel_panel_disable_backlight(old_conn_state);
|
||||
intel_backlight_disable(old_conn_state);
|
||||
|
||||
/* step2d,e: disable transcoder and wait */
|
||||
gen11_dsi_disable_transcoder(encoder);
|
||||
@ -1577,8 +1585,14 @@ static void gen11_dsi_sync_state(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
|
||||
enum pipe pipe = intel_crtc->pipe;
|
||||
struct intel_crtc *intel_crtc;
|
||||
enum pipe pipe;
|
||||
|
||||
if (!crtc_state)
|
||||
return;
|
||||
|
||||
intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
|
||||
pipe = intel_crtc->pipe;
|
||||
|
||||
/* wa verify 1409054076:icl,jsl,ehl */
|
||||
if (DISPLAY_VER(dev_priv) == 11 && pipe == PIPE_B &&
|
||||
@ -1644,16 +1658,17 @@ static int gen11_dsi_compute_config(struct intel_encoder *encoder,
|
||||
struct intel_dsi *intel_dsi = container_of(encoder, struct intel_dsi,
|
||||
base);
|
||||
struct intel_connector *intel_connector = intel_dsi->attached_connector;
|
||||
const struct drm_display_mode *fixed_mode =
|
||||
intel_connector->panel.fixed_mode;
|
||||
struct drm_display_mode *adjusted_mode =
|
||||
&pipe_config->hw.adjusted_mode;
|
||||
int ret;
|
||||
|
||||
pipe_config->output_format = INTEL_OUTPUT_FORMAT_RGB;
|
||||
intel_fixed_panel_mode(fixed_mode, adjusted_mode);
|
||||
|
||||
ret = intel_pch_panel_fitting(pipe_config, conn_state);
|
||||
ret = intel_panel_compute_config(intel_connector, adjusted_mode);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = intel_panel_fitting(pipe_config, conn_state);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -1809,18 +1824,18 @@ static ssize_t gen11_dsi_host_transfer(struct mipi_dsi_host *host,
|
||||
if (msg->flags & MIPI_DSI_MSG_USE_LPM)
|
||||
enable_lpdt = true;
|
||||
|
||||
/* send packet header */
|
||||
ret = dsi_send_pkt_hdr(intel_dsi_host, dsi_pkt, enable_lpdt);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
/* only long packet contains payload */
|
||||
if (mipi_dsi_packet_format_is_long(msg->type)) {
|
||||
ret = dsi_send_pkt_payld(intel_dsi_host, dsi_pkt);
|
||||
ret = dsi_send_pkt_payld(intel_dsi_host, &dsi_pkt);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* send packet header */
|
||||
ret = dsi_send_pkt_hdr(intel_dsi_host, &dsi_pkt, enable_lpdt);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
//TODO: add payload receive code if needed
|
||||
|
||||
ret = sizeof(dsi_pkt.header) + dsi_pkt.payload_length;
|
||||
@ -2008,7 +2023,7 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
|
||||
encoder->port = port;
|
||||
encoder->get_config = gen11_dsi_get_config;
|
||||
encoder->sync_state = gen11_dsi_sync_state;
|
||||
encoder->update_pipe = intel_panel_update_backlight;
|
||||
encoder->update_pipe = intel_backlight_update;
|
||||
encoder->compute_config = gen11_dsi_compute_config;
|
||||
encoder->get_hw_state = gen11_dsi_get_hw_state;
|
||||
encoder->initial_fastset_check = gen11_dsi_initial_fastset_check;
|
||||
@ -2042,7 +2057,7 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
|
||||
}
|
||||
|
||||
intel_panel_init(&intel_connector->panel, fixed_mode, NULL);
|
||||
intel_panel_setup_backlight(connector, INVALID_PIPE);
|
||||
intel_backlight_setup(intel_connector, INVALID_PIPE);
|
||||
|
||||
if (dev_priv->vbt.dsi.config->dual_link)
|
||||
intel_dsi->ports = BIT(PORT_A) | BIT(PORT_B);
|
||||
|
@@ -282,3 +282,49 @@ void intel_acpi_device_id_update(struct drm_i915_private *dev_priv)
	}
	drm_connector_list_iter_end(&conn_iter);
}
+
+/* NOTE: The connector order must be final before this is called. */
+void intel_acpi_assign_connector_fwnodes(struct drm_i915_private *i915)
+{
+	struct drm_connector_list_iter conn_iter;
+	struct drm_device *drm_dev = &i915->drm;
+	struct fwnode_handle *fwnode = NULL;
+	struct drm_connector *connector;
+	struct acpi_device *adev;
+
+	drm_connector_list_iter_begin(drm_dev, &conn_iter);
+	drm_for_each_connector_iter(connector, &conn_iter) {
+		/* Always getting the next, even when the last was not used. */
+		fwnode = device_get_next_child_node(drm_dev->dev, fwnode);
+		if (!fwnode)
+			break;
+
+		switch (connector->connector_type) {
+		case DRM_MODE_CONNECTOR_LVDS:
+		case DRM_MODE_CONNECTOR_eDP:
+		case DRM_MODE_CONNECTOR_DSI:
+			/*
+			 * Integrated displays have a specific address 0x1f on
+			 * most Intel platforms, but not on all of them.
+			 */
+			adev = acpi_find_child_device(ACPI_COMPANION(drm_dev->dev),
+						      0x1f, 0);
+			if (adev) {
+				connector->fwnode =
+					fwnode_handle_get(acpi_fwnode_handle(adev));
+				break;
+			}
+			fallthrough;
+		default:
+			connector->fwnode = fwnode_handle_get(fwnode);
+			break;
+		}
+	}
+	drm_connector_list_iter_end(&conn_iter);
+	/*
+	 * device_get_next_child_node() takes a reference on the fwnode, if
+	 * we stopped iterating because we are out of connectors we need to
+	 * put this, otherwise fwnode is NULL and the put is a no-op.
+	 */
+	fwnode_handle_put(fwnode);
+}
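A stripped-down restatement of the pairing loop added above, with the ACPI integrated-panel special case removed, to isolate the reference-counting contract (the function name is illustrative, not from the driver):

#include <linux/property.h>
#include <drm/drm_connector.h>
#include <drm/drm_device.h>

static void example_assign_connector_fwnodes(struct drm_device *drm)
{
	struct drm_connector_list_iter conn_iter;
	struct drm_connector *connector;
	struct fwnode_handle *fwnode = NULL;

	drm_connector_list_iter_begin(drm, &conn_iter);
	drm_for_each_connector_iter(connector, &conn_iter) {
		/* Always advance, even if the previous node was not used. */
		fwnode = device_get_next_child_node(drm->dev, fwnode);
		if (!fwnode)
			break;

		/* The connector keeps its own reference on the node. */
		connector->fwnode = fwnode_handle_get(fwnode);
	}
	drm_connector_list_iter_end(&conn_iter);

	/* Drop the iterator's reference; a no-op when fwnode is NULL. */
	fwnode_handle_put(fwnode);
}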
|
@@ -13,6 +13,7 @@ void intel_register_dsm_handler(void);
void intel_unregister_dsm_handler(void);
void intel_dsm_get_bios_data_funcs_supported(struct drm_i915_private *i915);
void intel_acpi_device_id_update(struct drm_i915_private *i915);
+void intel_acpi_assign_connector_fwnodes(struct drm_i915_private *i915);
#else
static inline void intel_register_dsm_handler(void) { return; }
static inline void intel_unregister_dsm_handler(void) { return; }
@@ -20,6 +21,8 @@ static inline
void intel_dsm_get_bios_data_funcs_supported(struct drm_i915_private *i915) { return; }
static inline
void intel_acpi_device_id_update(struct drm_i915_private *i915) { return; }
+static inline
+void intel_acpi_assign_connector_fwnodes(struct drm_i915_private *i915) { return; }
#endif /* CONFIG_ACPI */

#endif /* __INTEL_ACPI_H__ */
|
@ -848,10 +848,10 @@ void intel_audio_codec_enable(struct intel_encoder *encoder,
|
||||
|
||||
connector->eld[6] = drm_av_sync_delay(connector, adjusted_mode) / 2;
|
||||
|
||||
if (dev_priv->display.audio_codec_enable)
|
||||
dev_priv->display.audio_codec_enable(encoder,
|
||||
crtc_state,
|
||||
conn_state);
|
||||
if (dev_priv->audio_funcs)
|
||||
dev_priv->audio_funcs->audio_codec_enable(encoder,
|
||||
crtc_state,
|
||||
conn_state);
|
||||
|
||||
mutex_lock(&dev_priv->av_mutex);
|
||||
encoder->audio_connector = connector;
|
||||
@ -893,10 +893,10 @@ void intel_audio_codec_disable(struct intel_encoder *encoder,
|
||||
enum port port = encoder->port;
|
||||
enum pipe pipe = crtc->pipe;
|
||||
|
||||
if (dev_priv->display.audio_codec_disable)
|
||||
dev_priv->display.audio_codec_disable(encoder,
|
||||
old_crtc_state,
|
||||
old_conn_state);
|
||||
if (dev_priv->audio_funcs)
|
||||
dev_priv->audio_funcs->audio_codec_disable(encoder,
|
||||
old_crtc_state,
|
||||
old_conn_state);
|
||||
|
||||
mutex_lock(&dev_priv->av_mutex);
|
||||
encoder->audio_connector = NULL;
|
||||
@ -915,6 +915,21 @@ void intel_audio_codec_disable(struct intel_encoder *encoder,
|
||||
intel_lpe_audio_notify(dev_priv, pipe, port, NULL, 0, false);
|
||||
}
|
||||
|
||||
static const struct intel_audio_funcs g4x_audio_funcs = {
|
||||
.audio_codec_enable = g4x_audio_codec_enable,
|
||||
.audio_codec_disable = g4x_audio_codec_disable,
|
||||
};
|
||||
|
||||
static const struct intel_audio_funcs ilk_audio_funcs = {
|
||||
.audio_codec_enable = ilk_audio_codec_enable,
|
||||
.audio_codec_disable = ilk_audio_codec_disable,
|
||||
};
|
||||
|
||||
static const struct intel_audio_funcs hsw_audio_funcs = {
|
||||
.audio_codec_enable = hsw_audio_codec_enable,
|
||||
.audio_codec_disable = hsw_audio_codec_disable,
|
||||
};
|
||||
|
||||
/**
|
||||
* intel_init_audio_hooks - Set up chip specific audio hooks
|
||||
* @dev_priv: device private
|
||||
@ -922,17 +937,13 @@ void intel_audio_codec_disable(struct intel_encoder *encoder,
|
||||
void intel_init_audio_hooks(struct drm_i915_private *dev_priv)
|
||||
{
|
||||
if (IS_G4X(dev_priv)) {
|
||||
dev_priv->display.audio_codec_enable = g4x_audio_codec_enable;
|
||||
dev_priv->display.audio_codec_disable = g4x_audio_codec_disable;
|
||||
dev_priv->audio_funcs = &g4x_audio_funcs;
|
||||
} else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
|
||||
dev_priv->display.audio_codec_enable = ilk_audio_codec_enable;
|
||||
dev_priv->display.audio_codec_disable = ilk_audio_codec_disable;
|
||||
dev_priv->audio_funcs = &ilk_audio_funcs;
|
||||
} else if (IS_HASWELL(dev_priv) || DISPLAY_VER(dev_priv) >= 8) {
|
||||
dev_priv->display.audio_codec_enable = hsw_audio_codec_enable;
|
||||
dev_priv->display.audio_codec_disable = hsw_audio_codec_disable;
|
||||
dev_priv->audio_funcs = &hsw_audio_funcs;
|
||||
} else if (HAS_PCH_SPLIT(dev_priv)) {
|
||||
dev_priv->display.audio_codec_enable = ilk_audio_codec_enable;
|
||||
dev_priv->display.audio_codec_disable = ilk_audio_codec_disable;
|
||||
dev_priv->audio_funcs = &ilk_audio_funcs;
|
||||
}
|
||||
}
|
||||
|
||||
@ -1308,8 +1319,9 @@ static void i915_audio_component_init(struct drm_i915_private *dev_priv)
|
||||
else
|
||||
aud_freq = aud_freq_init;
|
||||
|
||||
/* use BIOS provided value for TGL unless it is a known bad value */
|
||||
if (IS_TIGERLAKE(dev_priv) && aud_freq_init != AUD_FREQ_TGL_BROKEN)
|
||||
/* use BIOS provided value for TGL and RKL unless it is a known bad value */
|
||||
if ((IS_TIGERLAKE(dev_priv) || IS_ROCKETLAKE(dev_priv)) &&
|
||||
aud_freq_init != AUD_FREQ_TGL_BROKEN)
|
||||
aud_freq = aud_freq_init;
|
||||
|
||||
drm_dbg_kms(&dev_priv->drm, "use AUD_FREQ_CNTRL of 0x%x (init value 0x%x)\n",
|
||||
|
drivers/gpu/drm/i915/display/intel_backlight.c: new file, 1776 lines (diff suppressed because it is too large)
drivers/gpu/drm/i915/display/intel_backlight.h: new file, 52 lines
@@ -0,0 +1,52 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2021 Intel Corporation
 */

#ifndef __INTEL_BACKLIGHT_H__
#define __INTEL_BACKLIGHT_H__

#include <linux/types.h>

struct drm_connector_state;
struct intel_atomic_state;
struct intel_connector;
struct intel_crtc_state;
struct intel_encoder;
struct intel_panel;
enum pipe;

void intel_backlight_init_funcs(struct intel_panel *panel);
int intel_backlight_setup(struct intel_connector *connector, enum pipe pipe);
void intel_backlight_destroy(struct intel_panel *panel);

void intel_backlight_enable(const struct intel_crtc_state *crtc_state,
			    const struct drm_connector_state *conn_state);
void intel_backlight_update(struct intel_atomic_state *state,
			    struct intel_encoder *encoder,
			    const struct intel_crtc_state *crtc_state,
			    const struct drm_connector_state *conn_state);
void intel_backlight_disable(const struct drm_connector_state *old_conn_state);

void intel_backlight_set_acpi(const struct drm_connector_state *conn_state,
			      u32 level, u32 max);
void intel_backlight_set_pwm_level(const struct drm_connector_state *conn_state,
				   u32 level);
u32 intel_backlight_invert_pwm_level(struct intel_connector *connector, u32 level);
u32 intel_backlight_level_to_pwm(struct intel_connector *connector, u32 level);
u32 intel_backlight_level_from_pwm(struct intel_connector *connector, u32 val);

#if IS_ENABLED(CONFIG_BACKLIGHT_CLASS_DEVICE)
int intel_backlight_device_register(struct intel_connector *connector);
void intel_backlight_device_unregister(struct intel_connector *connector);
#else /* CONFIG_BACKLIGHT_CLASS_DEVICE */
static inline int intel_backlight_device_register(struct intel_connector *connector)
{
	return 0;
}
static inline void intel_backlight_device_unregister(struct intel_connector *connector)
{
}
#endif /* CONFIG_BACKLIGHT_CLASS_DEVICE */

#endif /* __INTEL_BACKLIGHT_H__ */
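The declarations above are renames of the old intel_panel backlight entry points; the mapping visible elsewhere in this diff is intel_panel_setup_backlight() -> intel_backlight_setup(), intel_panel_enable_backlight() -> intel_backlight_enable(), intel_panel_disable_backlight() -> intel_backlight_disable() and intel_panel_update_backlight() -> intel_backlight_update(). A hedged usage fragment, modelled on the icl_dsi.c hunk earlier in this diff (the wrapper function is illustrative):

#include "intel_backlight.h"

static void example_connector_init_backlight(struct intel_connector *intel_connector)
{
	/*
	 * Probe and register the backlight for this connector; the pipe is
	 * not known yet at this point, as in the DSI init path above.
	 */
	intel_backlight_setup(intel_connector, INVALID_PIPE);
}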
@ -451,13 +451,23 @@ parse_lfp_backlight(struct drm_i915_private *i915,
|
||||
}
|
||||
|
||||
i915->vbt.backlight.type = INTEL_BACKLIGHT_DISPLAY_DDI;
|
||||
if (bdb->version >= 191 &&
|
||||
get_blocksize(backlight_data) >= sizeof(*backlight_data)) {
|
||||
const struct lfp_backlight_control_method *method;
|
||||
if (bdb->version >= 191) {
|
||||
size_t exp_size;
|
||||
|
||||
method = &backlight_data->backlight_control[panel_type];
|
||||
i915->vbt.backlight.type = method->type;
|
||||
i915->vbt.backlight.controller = method->controller;
|
||||
if (bdb->version >= 236)
|
||||
exp_size = sizeof(struct bdb_lfp_backlight_data);
|
||||
else if (bdb->version >= 234)
|
||||
exp_size = EXP_BDB_LFP_BL_DATA_SIZE_REV_234;
|
||||
else
|
||||
exp_size = EXP_BDB_LFP_BL_DATA_SIZE_REV_191;
|
||||
|
||||
if (get_blocksize(backlight_data) >= exp_size) {
|
||||
const struct lfp_backlight_control_method *method;
|
||||
|
||||
method = &backlight_data->backlight_control[panel_type];
|
||||
i915->vbt.backlight.type = method->type;
|
||||
i915->vbt.backlight.controller = method->controller;
|
||||
}
|
||||
}
|
||||
|
||||
i915->vbt.backlight.pwm_freq_hz = entry->pwm_freq_hz;
|
||||
@ -483,6 +493,9 @@ parse_lfp_backlight(struct drm_i915_private *i915,
|
||||
level = 255;
|
||||
}
|
||||
i915->vbt.backlight.min_brightness = min_level;
|
||||
|
||||
i915->vbt.backlight.brightness_precision_bits =
|
||||
backlight_data->brightness_precision_bits[panel_type];
|
||||
} else {
|
||||
level = backlight_data->level[panel_type];
|
||||
i915->vbt.backlight.min_brightness = entry->min_brightness;
|
||||
@ -1501,110 +1514,6 @@ static u8 translate_iboost(u8 val)
|
||||
return mapping[val];
|
||||
}
|
||||
|
||||
static enum port get_port_by_ddc_pin(struct drm_i915_private *i915, u8 ddc_pin)
|
||||
{
|
||||
const struct ddi_vbt_port_info *info;
|
||||
enum port port;
|
||||
|
||||
if (!ddc_pin)
|
||||
return PORT_NONE;
|
||||
|
||||
for_each_port(port) {
|
||||
info = &i915->vbt.ddi_port_info[port];
|
||||
|
||||
if (info->devdata && ddc_pin == info->alternate_ddc_pin)
|
||||
return port;
|
||||
}
|
||||
|
||||
return PORT_NONE;
|
||||
}
|
||||
|
||||
static void sanitize_ddc_pin(struct drm_i915_private *i915,
|
||||
enum port port)
|
||||
{
|
||||
struct ddi_vbt_port_info *info = &i915->vbt.ddi_port_info[port];
|
||||
struct child_device_config *child;
|
||||
enum port p;
|
||||
|
||||
p = get_port_by_ddc_pin(i915, info->alternate_ddc_pin);
|
||||
if (p == PORT_NONE)
|
||||
return;
|
||||
|
||||
drm_dbg_kms(&i915->drm,
|
||||
"port %c trying to use the same DDC pin (0x%x) as port %c, "
|
||||
"disabling port %c DVI/HDMI support\n",
|
||||
port_name(port), info->alternate_ddc_pin,
|
||||
port_name(p), port_name(p));
|
||||
|
||||
/*
|
||||
* If we have multiple ports supposedly sharing the pin, then dvi/hdmi
|
||||
* couldn't exist on the shared port. Otherwise they share the same ddc
|
||||
* pin and system couldn't communicate with them separately.
|
||||
*
|
||||
* Give inverse child device order the priority, last one wins. Yes,
|
||||
* there are real machines (eg. Asrock B250M-HDV) where VBT has both
|
||||
* port A and port E with the same AUX ch and we must pick port E :(
|
||||
*/
|
||||
info = &i915->vbt.ddi_port_info[p];
|
||||
child = &info->devdata->child;
|
||||
|
||||
child->device_type &= ~DEVICE_TYPE_TMDS_DVI_SIGNALING;
|
||||
child->device_type |= DEVICE_TYPE_NOT_HDMI_OUTPUT;
|
||||
|
||||
info->alternate_ddc_pin = 0;
|
||||
}
|
||||
|
||||
static enum port get_port_by_aux_ch(struct drm_i915_private *i915, u8 aux_ch)
|
||||
{
|
||||
const struct ddi_vbt_port_info *info;
|
||||
enum port port;
|
||||
|
||||
if (!aux_ch)
|
||||
return PORT_NONE;
|
||||
|
||||
for_each_port(port) {
|
||||
info = &i915->vbt.ddi_port_info[port];
|
||||
|
||||
if (info->devdata && aux_ch == info->alternate_aux_channel)
|
||||
return port;
|
||||
}
|
||||
|
||||
return PORT_NONE;
|
||||
}
|
||||
|
||||
static void sanitize_aux_ch(struct drm_i915_private *i915,
|
||||
enum port port)
|
||||
{
|
||||
struct ddi_vbt_port_info *info = &i915->vbt.ddi_port_info[port];
|
||||
struct child_device_config *child;
|
||||
enum port p;
|
||||
|
||||
p = get_port_by_aux_ch(i915, info->alternate_aux_channel);
|
||||
if (p == PORT_NONE)
|
||||
return;
|
||||
|
||||
drm_dbg_kms(&i915->drm,
|
||||
"port %c trying to use the same AUX CH (0x%x) as port %c, "
|
||||
"disabling port %c DP support\n",
|
||||
port_name(port), info->alternate_aux_channel,
|
||||
port_name(p), port_name(p));
|
||||
|
||||
/*
|
||||
* If we have multiple ports supposedly sharing the aux channel, then DP
|
||||
* couldn't exist on the shared port. Otherwise they share the same aux
|
||||
* channel and system couldn't communicate with them separately.
|
||||
*
|
||||
* Give inverse child device order the priority, last one wins. Yes,
|
||||
* there are real machines (eg. Asrock B250M-HDV) where VBT has both
|
||||
* port A and port E with the same AUX ch and we must pick port E :(
|
||||
*/
|
||||
info = &i915->vbt.ddi_port_info[p];
|
||||
child = &info->devdata->child;
|
||||
|
||||
child->device_type &= ~DEVICE_TYPE_DISPLAYPORT_OUTPUT;
|
||||
info->alternate_aux_channel = 0;
|
||||
}
|
||||
|
||||
static const u8 cnp_ddc_pin_map[] = {
|
||||
[0] = 0, /* N/A */
|
||||
[DDC_BUS_DDI_B] = GMBUS_PIN_1_BXT,
|
||||
@ -1682,6 +1591,122 @@ static u8 map_ddc_pin(struct drm_i915_private *i915, u8 vbt_pin)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static enum port get_port_by_ddc_pin(struct drm_i915_private *i915, u8 ddc_pin)
|
||||
{
|
||||
const struct intel_bios_encoder_data *devdata;
|
||||
enum port port;
|
||||
|
||||
if (!ddc_pin)
|
||||
return PORT_NONE;
|
||||
|
||||
for_each_port(port) {
|
||||
devdata = i915->vbt.ports[port];
|
||||
|
||||
if (devdata && ddc_pin == devdata->child.ddc_pin)
|
||||
return port;
|
||||
}
|
||||
|
||||
return PORT_NONE;
|
||||
}
|
||||
|
||||
static void sanitize_ddc_pin(struct intel_bios_encoder_data *devdata,
|
||||
enum port port)
|
||||
{
|
||||
struct drm_i915_private *i915 = devdata->i915;
|
||||
struct child_device_config *child;
|
||||
u8 mapped_ddc_pin;
|
||||
enum port p;
|
||||
|
||||
if (!devdata->child.ddc_pin)
|
||||
return;
|
||||
|
||||
mapped_ddc_pin = map_ddc_pin(i915, devdata->child.ddc_pin);
|
||||
if (!intel_gmbus_is_valid_pin(i915, mapped_ddc_pin)) {
|
||||
drm_dbg_kms(&i915->drm,
|
||||
"Port %c has invalid DDC pin %d, "
|
||||
"sticking to defaults\n",
|
||||
port_name(port), mapped_ddc_pin);
|
||||
devdata->child.ddc_pin = 0;
|
||||
return;
|
||||
}
|
||||
|
||||
p = get_port_by_ddc_pin(i915, devdata->child.ddc_pin);
|
||||
if (p == PORT_NONE)
|
||||
return;
|
||||
|
||||
drm_dbg_kms(&i915->drm,
|
||||
"port %c trying to use the same DDC pin (0x%x) as port %c, "
|
||||
"disabling port %c DVI/HDMI support\n",
|
||||
port_name(port), mapped_ddc_pin,
|
||||
port_name(p), port_name(p));
|
||||
|
||||
/*
|
||||
* If we have multiple ports supposedly sharing the pin, then dvi/hdmi
|
||||
* couldn't exist on the shared port. Otherwise they share the same ddc
|
||||
* pin and system couldn't communicate with them separately.
|
||||
*
|
||||
* Give inverse child device order the priority, last one wins. Yes,
|
||||
* there are real machines (eg. Asrock B250M-HDV) where VBT has both
|
||||
* port A and port E with the same AUX ch and we must pick port E :(
|
||||
*/
|
||||
child = &i915->vbt.ports[p]->child;
|
||||
|
||||
child->device_type &= ~DEVICE_TYPE_TMDS_DVI_SIGNALING;
|
||||
child->device_type |= DEVICE_TYPE_NOT_HDMI_OUTPUT;
|
||||
|
||||
child->ddc_pin = 0;
|
||||
}
|
||||
|
||||
static enum port get_port_by_aux_ch(struct drm_i915_private *i915, u8 aux_ch)
{
const struct intel_bios_encoder_data *devdata;
enum port port;

if (!aux_ch)
return PORT_NONE;

for_each_port(port) {
devdata = i915->vbt.ports[port];

if (devdata && aux_ch == devdata->child.aux_channel)
return port;
}

return PORT_NONE;
}

static void sanitize_aux_ch(struct intel_bios_encoder_data *devdata,
enum port port)
{
struct drm_i915_private *i915 = devdata->i915;
struct child_device_config *child;
enum port p;

p = get_port_by_aux_ch(i915, devdata->child.aux_channel);
if (p == PORT_NONE)
return;

drm_dbg_kms(&i915->drm,
"port %c trying to use the same AUX CH (0x%x) as port %c, "
"disabling port %c DP support\n",
port_name(port), devdata->child.aux_channel,
port_name(p), port_name(p));

/*
* If we have multiple ports supposedly sharing the aux channel, then DP
* couldn't exist on the shared port. Otherwise they share the same aux
* channel and system couldn't communicate with them separately.
*
* Give inverse child device order the priority, last one wins. Yes,
* there are real machines (eg. Asrock B250M-HDV) where VBT has both
* port A and port E with the same AUX ch and we must pick port E :(
*/
child = &i915->vbt.ports[p]->child;

child->device_type &= ~DEVICE_TYPE_DISPLAYPORT_OUTPUT;
child->aux_channel = 0;
}

static enum port __dvo_port_to_port(int n_ports, int n_dvo,
const int port_mapping[][3], u8 dvo_port)
{
@@ -1815,6 +1840,17 @@ static int parse_bdb_216_dp_max_link_rate(const int vbt_max_link_rate)
}
}

static int _intel_bios_dp_max_link_rate(const struct intel_bios_encoder_data *devdata)
{
if (!devdata || devdata->i915->vbt.version < 216)
return 0;

if (devdata->i915->vbt.version >= 230)
return parse_bdb_230_dp_max_link_rate(devdata->child.dp_max_link_rate);
else
return parse_bdb_216_dp_max_link_rate(devdata->child.dp_max_link_rate);
}

static void sanitize_device_type(struct intel_bios_encoder_data *devdata,
enum port port)
{
@@ -1868,6 +1904,32 @@ intel_bios_encoder_supports_edp(const struct intel_bios_encoder_data *devdata)
devdata->child.device_type & DEVICE_TYPE_INTERNAL_CONNECTOR;
}

static int _intel_bios_hdmi_level_shift(const struct intel_bios_encoder_data *devdata)
{
if (!devdata || devdata->i915->vbt.version < 158)
return -1;

return devdata->child.hdmi_level_shifter_value;
}

static int _intel_bios_max_tmds_clock(const struct intel_bios_encoder_data *devdata)
{
if (!devdata || devdata->i915->vbt.version < 204)
return 0;

switch (devdata->child.hdmi_max_data_rate) {
default:
MISSING_CASE(devdata->child.hdmi_max_data_rate);
fallthrough;
case HDMI_MAX_DATA_RATE_PLATFORM:
return 0;
case HDMI_MAX_DATA_RATE_297:
return 297000;
case HDMI_MAX_DATA_RATE_165:
return 165000;
}
}

static bool is_port_valid(struct drm_i915_private *i915, enum port port)
{
/*
@@ -1885,9 +1947,8 @@ static void parse_ddi_port(struct drm_i915_private *i915,
struct intel_bios_encoder_data *devdata)
{
const struct child_device_config *child = &devdata->child;
struct ddi_vbt_port_info *info;
bool is_dvi, is_hdmi, is_dp, is_edp, is_crt, supports_typec_usb, supports_tbt;
int dp_boost_level, hdmi_boost_level;
int dp_boost_level, dp_max_link_rate, hdmi_boost_level, hdmi_level_shift, max_tmds_clock;
enum port port;

port = dvo_port_to_port(i915, child->dvo_port);
@@ -1901,9 +1962,7 @@ static void parse_ddi_port(struct drm_i915_private *i915,
return;
}

info = &i915->vbt.ddi_port_info[port];

if (info->devdata) {
if (i915->vbt.ports[port]) {
drm_dbg_kms(&i915->drm,
"More than one child device for port %c in VBT, using the first.\n",
port_name(port));
@@ -1928,62 +1987,24 @@ static void parse_ddi_port(struct drm_i915_private *i915,
supports_typec_usb, supports_tbt,
devdata->dsc != NULL);

if (is_dvi) {
u8 ddc_pin;
if (is_dvi)
sanitize_ddc_pin(devdata, port);

ddc_pin = map_ddc_pin(i915, child->ddc_pin);
if (intel_gmbus_is_valid_pin(i915, ddc_pin)) {
info->alternate_ddc_pin = ddc_pin;
sanitize_ddc_pin(i915, port);
} else {
drm_dbg_kms(&i915->drm,
"Port %c has invalid DDC pin %d, "
"sticking to defaults\n",
port_name(port), ddc_pin);
}
}
if (is_dp)
sanitize_aux_ch(devdata, port);

if (is_dp) {
info->alternate_aux_channel = child->aux_channel;

sanitize_aux_ch(i915, port);
}

if (i915->vbt.version >= 158) {
/* The VBT HDMI level shift values match the table we have. */
u8 hdmi_level_shift = child->hdmi_level_shifter_value;
hdmi_level_shift = _intel_bios_hdmi_level_shift(devdata);
if (hdmi_level_shift >= 0) {
drm_dbg_kms(&i915->drm,
"Port %c VBT HDMI level shift: %d\n",
port_name(port),
hdmi_level_shift);
info->hdmi_level_shift = hdmi_level_shift;
info->hdmi_level_shift_set = true;
port_name(port), hdmi_level_shift);
}

if (i915->vbt.version >= 204) {
int max_tmds_clock;

switch (child->hdmi_max_data_rate) {
default:
MISSING_CASE(child->hdmi_max_data_rate);
fallthrough;
case HDMI_MAX_DATA_RATE_PLATFORM:
max_tmds_clock = 0;
break;
case HDMI_MAX_DATA_RATE_297:
max_tmds_clock = 297000;
break;
case HDMI_MAX_DATA_RATE_165:
max_tmds_clock = 165000;
break;
}

if (max_tmds_clock)
drm_dbg_kms(&i915->drm,
"Port %c VBT HDMI max TMDS clock: %d kHz\n",
port_name(port), max_tmds_clock);
info->max_tmds_clock = max_tmds_clock;
}
max_tmds_clock = _intel_bios_max_tmds_clock(devdata);
if (max_tmds_clock)
drm_dbg_kms(&i915->drm,
"Port %c VBT HDMI max TMDS clock: %d kHz\n",
port_name(port), max_tmds_clock);

/* I_boost config for SKL and above */
dp_boost_level = intel_bios_encoder_dp_boost_level(devdata);
@@ -1998,19 +2019,13 @@ static void parse_ddi_port(struct drm_i915_private *i915,
"Port %c VBT HDMI boost level: %d\n",
port_name(port), hdmi_boost_level);

/* DP max link rate for GLK+ */
if (i915->vbt.version >= 216) {
if (i915->vbt.version >= 230)
info->dp_max_link_rate = parse_bdb_230_dp_max_link_rate(child->dp_max_link_rate);
else
info->dp_max_link_rate = parse_bdb_216_dp_max_link_rate(child->dp_max_link_rate);

dp_max_link_rate = _intel_bios_dp_max_link_rate(devdata);
if (dp_max_link_rate)
drm_dbg_kms(&i915->drm,
"Port %c VBT DP max link rate: %d\n",
port_name(port), info->dp_max_link_rate);
}
port_name(port), dp_max_link_rate);

info->devdata = devdata;
i915->vbt.ports[port] = devdata;
}

static void parse_ddi_ports(struct drm_i915_private *i915)
@@ -2548,12 +2563,8 @@ bool intel_bios_is_port_present(struct drm_i915_private *i915, enum port port)
[PORT_F] = { DVO_PORT_DPF, DVO_PORT_HDMIF, },
};

if (HAS_DDI(i915)) {
const struct ddi_vbt_port_info *port_info =
&i915->vbt.ddi_port_info[port];

return port_info->devdata;
}
if (HAS_DDI(i915))
return i915->vbt.ports[port];

/* FIXME maybe deal with port A as well? */
if (drm_WARN_ON(&i915->drm,
@@ -2804,8 +2815,7 @@ bool
intel_bios_is_port_hpd_inverted(const struct drm_i915_private *i915,
enum port port)
{
const struct intel_bios_encoder_data *devdata =
i915->vbt.ddi_port_info[port].devdata;
const struct intel_bios_encoder_data *devdata = i915->vbt.ports[port];

if (drm_WARN_ON_ONCE(&i915->drm,
!IS_GEMINILAKE(i915) && !IS_BROXTON(i915)))
@@ -2825,8 +2835,7 @@ bool
intel_bios_is_lspcon_present(const struct drm_i915_private *i915,
enum port port)
{
const struct intel_bios_encoder_data *devdata =
i915->vbt.ddi_port_info[port].devdata;
const struct intel_bios_encoder_data *devdata = i915->vbt.ports[port];

return HAS_LSPCON(i915) && devdata && devdata->child.lspcon;
}
@@ -2842,8 +2851,7 @@ bool
intel_bios_is_lane_reversal_needed(const struct drm_i915_private *i915,
enum port port)
{
const struct intel_bios_encoder_data *devdata =
i915->vbt.ddi_port_info[port].devdata;
const struct intel_bios_encoder_data *devdata = i915->vbt.ports[port];

return devdata && devdata->child.lane_reversal;
}
@@ -2851,11 +2859,10 @@ intel_bios_is_lane_reversal_needed(const struct drm_i915_private *i915,
enum aux_ch intel_bios_port_aux_ch(struct drm_i915_private *i915,
enum port port)
{
const struct ddi_vbt_port_info *info =
&i915->vbt.ddi_port_info[port];
const struct intel_bios_encoder_data *devdata = i915->vbt.ports[port];
enum aux_ch aux_ch;

if (!info->alternate_aux_channel) {
if (!devdata || !devdata->child.aux_channel) {
aux_ch = (enum aux_ch)port;

drm_dbg_kms(&i915->drm,
@@ -2871,7 +2878,7 @@ enum aux_ch intel_bios_port_aux_ch(struct drm_i915_private *i915,
* ADL-S VBT uses PHY based mapping. Combo PHYs A,B,C,D,E
* map to DDI A,TC1,TC2,TC3,TC4 respectively.
*/
switch (info->alternate_aux_channel) {
switch (devdata->child.aux_channel) {
case DP_AUX_A:
aux_ch = AUX_CH_A;
break;
@@ -2932,7 +2939,7 @@ enum aux_ch intel_bios_port_aux_ch(struct drm_i915_private *i915,
aux_ch = AUX_CH_I;
break;
default:
MISSING_CASE(info->alternate_aux_channel);
MISSING_CASE(devdata->child.aux_channel);
aux_ch = AUX_CH_A;
break;
}
@@ -2946,17 +2953,18 @@ enum aux_ch intel_bios_port_aux_ch(struct drm_i915_private *i915,
int intel_bios_max_tmds_clock(struct intel_encoder *encoder)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
const struct intel_bios_encoder_data *devdata = i915->vbt.ports[encoder->port];

return i915->vbt.ddi_port_info[encoder->port].max_tmds_clock;
return _intel_bios_max_tmds_clock(devdata);
}

/* This is an index in the HDMI/DVI DDI buffer translation table, or -1 */
int intel_bios_hdmi_level_shift(struct intel_encoder *encoder)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
const struct ddi_vbt_port_info *info =
&i915->vbt.ddi_port_info[encoder->port];
const struct intel_bios_encoder_data *devdata = i915->vbt.ports[encoder->port];

return info->hdmi_level_shift_set ? info->hdmi_level_shift : -1;
return _intel_bios_hdmi_level_shift(devdata);
}

int intel_bios_encoder_dp_boost_level(const struct intel_bios_encoder_data *devdata)
@@ -2978,15 +2986,20 @@ int intel_bios_encoder_hdmi_boost_level(const struct intel_bios_encoder_data *de
int intel_bios_dp_max_link_rate(struct intel_encoder *encoder)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
const struct intel_bios_encoder_data *devdata = i915->vbt.ports[encoder->port];

return i915->vbt.ddi_port_info[encoder->port].dp_max_link_rate;
return _intel_bios_dp_max_link_rate(devdata);
}

int intel_bios_alternate_ddc_pin(struct intel_encoder *encoder)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
const struct intel_bios_encoder_data *devdata = i915->vbt.ports[encoder->port];

return i915->vbt.ddi_port_info[encoder->port].alternate_ddc_pin;
if (!devdata || !devdata->child.ddc_pin)
return 0;

return map_ddc_pin(i915, devdata->child.ddc_pin);
}

bool intel_bios_encoder_supports_typec_usb(const struct intel_bios_encoder_data *devdata)
@@ -3002,5 +3015,5 @@ bool intel_bios_encoder_supports_tbt(const struct intel_bios_encoder_data *devda
const struct intel_bios_encoder_data *
intel_bios_encoder_data_lookup(struct drm_i915_private *i915, enum port port)
{
return i915->vbt.ddi_port_info[port].devdata;
return i915->vbt.ports[port];
}
@@ -222,31 +222,42 @@ static int icl_sagv_max_dclk(const struct intel_qgv_info *qi)

struct intel_sa_info {
u16 displayrtids;
u8 deburst, deprogbwlimit;
u8 deburst, deprogbwlimit, derating;
};

static const struct intel_sa_info icl_sa_info = {
.deburst = 8,
.deprogbwlimit = 25, /* GB/s */
.displayrtids = 128,
.derating = 10,
};

static const struct intel_sa_info tgl_sa_info = {
.deburst = 16,
.deprogbwlimit = 34, /* GB/s */
.displayrtids = 256,
.derating = 10,
};

static const struct intel_sa_info rkl_sa_info = {
.deburst = 16,
.deprogbwlimit = 20, /* GB/s */
.displayrtids = 128,
.derating = 10,
};

static const struct intel_sa_info adls_sa_info = {
.deburst = 16,
.deprogbwlimit = 38, /* GB/s */
.displayrtids = 256,
.derating = 10,
};

static const struct intel_sa_info adlp_sa_info = {
.deburst = 16,
.deprogbwlimit = 38, /* GB/s */
.displayrtids = 256,
.derating = 20,
};

static int icl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel_sa_info *sa)
@@ -302,7 +313,7 @@ static int icl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel
bw = icl_calc_bw(sp->dclk, clpchgroup * 32 * num_channels, ct);

bi->deratedbw[j] = min(maxdebw,
bw * 9 / 10); /* 90% */
bw * (100 - sa->derating) / 100);

drm_dbg_kms(&dev_priv->drm,
"BW%d / QGV %d: num_planes=%d deratedbw=%u\n",
@@ -400,7 +411,9 @@ void intel_bw_init_hw(struct drm_i915_private *dev_priv)

if (IS_DG2(dev_priv))
dg2_get_bw_info(dev_priv);
else if (IS_ALDERLAKE_S(dev_priv) || IS_ALDERLAKE_P(dev_priv))
else if (IS_ALDERLAKE_P(dev_priv))
icl_get_bw_info(dev_priv, &adlp_sa_info);
else if (IS_ALDERLAKE_S(dev_priv))
icl_get_bw_info(dev_priv, &adls_sa_info);
else if (IS_ROCKETLAKE(dev_priv))
icl_get_bw_info(dev_priv, &rkl_sa_info);
@@ -59,6 +59,37 @@
* dividers can be programmed correctly.
*/

void intel_cdclk_get_cdclk(struct drm_i915_private *dev_priv,
struct intel_cdclk_config *cdclk_config)
{
dev_priv->cdclk_funcs->get_cdclk(dev_priv, cdclk_config);
}

int intel_cdclk_bw_calc_min_cdclk(struct intel_atomic_state *state)
{
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
return dev_priv->cdclk_funcs->bw_calc_min_cdclk(state);
}

static void intel_cdclk_set_cdclk(struct drm_i915_private *dev_priv,
const struct intel_cdclk_config *cdclk_config,
enum pipe pipe)
{
dev_priv->cdclk_funcs->set_cdclk(dev_priv, cdclk_config, pipe);
}

static int intel_cdclk_modeset_calc_cdclk(struct drm_i915_private *dev_priv,
struct intel_cdclk_state *cdclk_config)
{
return dev_priv->cdclk_funcs->modeset_calc_cdclk(cdclk_config);
}

static u8 intel_cdclk_calc_voltage_level(struct drm_i915_private *dev_priv,
int cdclk)
{
return dev_priv->cdclk_funcs->calc_voltage_level(cdclk);
}

static void fixed_133mhz_get_cdclk(struct drm_i915_private *dev_priv,
struct intel_cdclk_config *cdclk_config)
{
@@ -1466,7 +1497,7 @@ static void bxt_get_cdclk(struct drm_i915_private *dev_priv,
* at least what the CDCLK frequency requires.
*/
cdclk_config->voltage_level =
dev_priv->display.calc_voltage_level(cdclk_config->cdclk);
intel_cdclk_calc_voltage_level(dev_priv, cdclk_config->cdclk);
}

static void bxt_de_pll_disable(struct drm_i915_private *dev_priv)
@@ -1777,7 +1808,7 @@ static void bxt_cdclk_init_hw(struct drm_i915_private *dev_priv)
cdclk_config.cdclk = bxt_calc_cdclk(dev_priv, 0);
cdclk_config.vco = bxt_calc_cdclk_pll_vco(dev_priv, cdclk_config.cdclk);
cdclk_config.voltage_level =
dev_priv->display.calc_voltage_level(cdclk_config.cdclk);
intel_cdclk_calc_voltage_level(dev_priv, cdclk_config.cdclk);

bxt_set_cdclk(dev_priv, &cdclk_config, INVALID_PIPE);
}
@@ -1789,7 +1820,7 @@ static void bxt_cdclk_uninit_hw(struct drm_i915_private *dev_priv)
cdclk_config.cdclk = cdclk_config.bypass;
cdclk_config.vco = 0;
cdclk_config.voltage_level =
dev_priv->display.calc_voltage_level(cdclk_config.cdclk);
intel_cdclk_calc_voltage_level(dev_priv, cdclk_config.cdclk);

bxt_set_cdclk(dev_priv, &cdclk_config, INVALID_PIPE);
}
@@ -1932,7 +1963,7 @@ static void intel_set_cdclk(struct drm_i915_private *dev_priv,
if (!intel_cdclk_changed(&dev_priv->cdclk.hw, cdclk_config))
return;

if (drm_WARN_ON_ONCE(&dev_priv->drm, !dev_priv->display.set_cdclk))
if (drm_WARN_ON_ONCE(&dev_priv->drm, !dev_priv->cdclk_funcs->set_cdclk))
return;

intel_dump_cdclk_config(cdclk_config, "Changing CDCLK to");
@@ -1956,7 +1987,7 @@ static void intel_set_cdclk(struct drm_i915_private *dev_priv,
&dev_priv->gmbus_mutex);
}

dev_priv->display.set_cdclk(dev_priv, cdclk_config, pipe);
intel_cdclk_set_cdclk(dev_priv, cdclk_config, pipe);

for_each_intel_dp(&dev_priv->drm, encoder) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
@@ -2139,6 +2170,14 @@ int intel_crtc_compute_min_cdclk(const struct intel_crtc_state *crtc_state)
/* Account for additional needs from the planes */
min_cdclk = max(intel_planes_min_cdclk(crtc_state), min_cdclk);

/*
* When we decide to use only one VDSC engine, since
* each VDSC operates with 1 ppc throughput, pixel clock
* cannot be higher than the VDSC clock (cdclk)
*/
if (crtc_state->dsc.compression_enable && !crtc_state->dsc.dsc_split)
min_cdclk = max(min_cdclk, (int)crtc_state->pixel_rate);

/*
* HACK. Currently for TGL platforms we calculate
* min_cdclk initially based on pixel_rate divided
@@ -2414,7 +2453,7 @@ static int bxt_modeset_calc_cdclk(struct intel_cdclk_state *cdclk_state)
cdclk_state->logical.cdclk = cdclk;
cdclk_state->logical.voltage_level =
max_t(int, min_voltage_level,
dev_priv->display.calc_voltage_level(cdclk));
intel_cdclk_calc_voltage_level(dev_priv, cdclk));

if (!cdclk_state->active_pipes) {
cdclk = bxt_calc_cdclk(dev_priv, cdclk_state->force_min_cdclk);
@@ -2423,7 +2462,7 @@ static int bxt_modeset_calc_cdclk(struct intel_cdclk_state *cdclk_state)
cdclk_state->actual.vco = vco;
cdclk_state->actual.cdclk = cdclk;
cdclk_state->actual.voltage_level =
dev_priv->display.calc_voltage_level(cdclk);
intel_cdclk_calc_voltage_level(dev_priv, cdclk);
} else {
cdclk_state->actual = cdclk_state->logical;
}
@@ -2515,7 +2554,7 @@ int intel_modeset_calc_cdclk(struct intel_atomic_state *state)
new_cdclk_state->active_pipes =
intel_calc_active_pipes(state, old_cdclk_state->active_pipes);

ret = dev_priv->display.modeset_calc_cdclk(new_cdclk_state);
ret = intel_cdclk_modeset_calc_cdclk(dev_priv, new_cdclk_state);
if (ret)
return ret;

@@ -2695,7 +2734,7 @@ void intel_update_max_cdclk(struct drm_i915_private *dev_priv)
*/
void intel_update_cdclk(struct drm_i915_private *dev_priv)
{
dev_priv->display.get_cdclk(dev_priv, &dev_priv->cdclk.hw);
intel_cdclk_get_cdclk(dev_priv, &dev_priv->cdclk.hw);

/*
* 9:0 CMBUS [sic] CDCLK frequency (cdfreq):
@@ -2845,6 +2884,157 @@ u32 intel_read_rawclk(struct drm_i915_private *dev_priv)
return freq;
}

static struct intel_cdclk_funcs tgl_cdclk_funcs = {
.get_cdclk = bxt_get_cdclk,
.set_cdclk = bxt_set_cdclk,
.bw_calc_min_cdclk = skl_bw_calc_min_cdclk,
.modeset_calc_cdclk = bxt_modeset_calc_cdclk,
.calc_voltage_level = tgl_calc_voltage_level,
};

static struct intel_cdclk_funcs ehl_cdclk_funcs = {
.get_cdclk = bxt_get_cdclk,
.set_cdclk = bxt_set_cdclk,
.bw_calc_min_cdclk = skl_bw_calc_min_cdclk,
.modeset_calc_cdclk = bxt_modeset_calc_cdclk,
.calc_voltage_level = ehl_calc_voltage_level,
};

static struct intel_cdclk_funcs icl_cdclk_funcs = {
.get_cdclk = bxt_get_cdclk,
.set_cdclk = bxt_set_cdclk,
.bw_calc_min_cdclk = skl_bw_calc_min_cdclk,
.modeset_calc_cdclk = bxt_modeset_calc_cdclk,
.calc_voltage_level = icl_calc_voltage_level,
};

static struct intel_cdclk_funcs bxt_cdclk_funcs = {
.get_cdclk = bxt_get_cdclk,
.set_cdclk = bxt_set_cdclk,
.bw_calc_min_cdclk = skl_bw_calc_min_cdclk,
.modeset_calc_cdclk = bxt_modeset_calc_cdclk,
.calc_voltage_level = bxt_calc_voltage_level,
};

static struct intel_cdclk_funcs skl_cdclk_funcs = {
.get_cdclk = skl_get_cdclk,
.set_cdclk = skl_set_cdclk,
.bw_calc_min_cdclk = skl_bw_calc_min_cdclk,
.modeset_calc_cdclk = skl_modeset_calc_cdclk,
};

static struct intel_cdclk_funcs bdw_cdclk_funcs = {
.get_cdclk = bdw_get_cdclk,
.set_cdclk = bdw_set_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = bdw_modeset_calc_cdclk,
};

static struct intel_cdclk_funcs chv_cdclk_funcs = {
.get_cdclk = vlv_get_cdclk,
.set_cdclk = chv_set_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = vlv_modeset_calc_cdclk,
};

static struct intel_cdclk_funcs vlv_cdclk_funcs = {
.get_cdclk = vlv_get_cdclk,
.set_cdclk = vlv_set_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = vlv_modeset_calc_cdclk,
};

static struct intel_cdclk_funcs hsw_cdclk_funcs = {
.get_cdclk = hsw_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

/* SNB, IVB, 965G, 945G */
static struct intel_cdclk_funcs fixed_400mhz_cdclk_funcs = {
.get_cdclk = fixed_400mhz_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

static struct intel_cdclk_funcs ilk_cdclk_funcs = {
.get_cdclk = fixed_450mhz_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

static struct intel_cdclk_funcs gm45_cdclk_funcs = {
.get_cdclk = gm45_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

/* G45 uses G33 */

static struct intel_cdclk_funcs i965gm_cdclk_funcs = {
.get_cdclk = i965gm_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

/* i965G uses fixed 400 */

static struct intel_cdclk_funcs pnv_cdclk_funcs = {
.get_cdclk = pnv_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

static struct intel_cdclk_funcs g33_cdclk_funcs = {
.get_cdclk = g33_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

static struct intel_cdclk_funcs i945gm_cdclk_funcs = {
.get_cdclk = i945gm_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

/* i945G uses fixed 400 */

static struct intel_cdclk_funcs i915gm_cdclk_funcs = {
.get_cdclk = i915gm_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

static struct intel_cdclk_funcs i915g_cdclk_funcs = {
.get_cdclk = fixed_333mhz_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

static struct intel_cdclk_funcs i865g_cdclk_funcs = {
.get_cdclk = fixed_266mhz_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

static struct intel_cdclk_funcs i85x_cdclk_funcs = {
.get_cdclk = i85x_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

static struct intel_cdclk_funcs i845g_cdclk_funcs = {
.get_cdclk = fixed_200mhz_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

static struct intel_cdclk_funcs i830_cdclk_funcs = {
.get_cdclk = fixed_133mhz_get_cdclk,
.bw_calc_min_cdclk = intel_bw_calc_min_cdclk,
.modeset_calc_cdclk = fixed_modeset_calc_cdclk,
};

/**
* intel_init_cdclk_hooks - Initialize CDCLK related modesetting hooks
* @dev_priv: i915 device
@@ -2852,119 +3042,78 @@ u32 intel_read_rawclk(struct drm_i915_private *dev_priv)
void intel_init_cdclk_hooks(struct drm_i915_private *dev_priv)
{
if (IS_DG2(dev_priv)) {
dev_priv->display.set_cdclk = bxt_set_cdclk;
dev_priv->display.bw_calc_min_cdclk = skl_bw_calc_min_cdclk;
dev_priv->display.modeset_calc_cdclk = bxt_modeset_calc_cdclk;
dev_priv->display.calc_voltage_level = tgl_calc_voltage_level;
dev_priv->cdclk_funcs = &tgl_cdclk_funcs;
dev_priv->cdclk.table = dg2_cdclk_table;
} else if (IS_ALDERLAKE_P(dev_priv)) {
dev_priv->display.set_cdclk = bxt_set_cdclk;
dev_priv->display.bw_calc_min_cdclk = skl_bw_calc_min_cdclk;
dev_priv->display.modeset_calc_cdclk = bxt_modeset_calc_cdclk;
dev_priv->display.calc_voltage_level = tgl_calc_voltage_level;
dev_priv->cdclk_funcs = &tgl_cdclk_funcs;
/* Wa_22011320316:adl-p[a0] */
if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0))
dev_priv->cdclk.table = adlp_a_step_cdclk_table;
else
dev_priv->cdclk.table = adlp_cdclk_table;
} else if (IS_ROCKETLAKE(dev_priv)) {
dev_priv->display.set_cdclk = bxt_set_cdclk;
dev_priv->display.bw_calc_min_cdclk = skl_bw_calc_min_cdclk;
dev_priv->display.modeset_calc_cdclk = bxt_modeset_calc_cdclk;
dev_priv->display.calc_voltage_level = tgl_calc_voltage_level;
dev_priv->cdclk_funcs = &tgl_cdclk_funcs;
dev_priv->cdclk.table = rkl_cdclk_table;
} else if (DISPLAY_VER(dev_priv) >= 12) {
dev_priv->display.set_cdclk = bxt_set_cdclk;
dev_priv->display.bw_calc_min_cdclk = skl_bw_calc_min_cdclk;
dev_priv->display.modeset_calc_cdclk = bxt_modeset_calc_cdclk;
dev_priv->display.calc_voltage_level = tgl_calc_voltage_level;
dev_priv->cdclk_funcs = &tgl_cdclk_funcs;
dev_priv->cdclk.table = icl_cdclk_table;
} else if (IS_JSL_EHL(dev_priv)) {
dev_priv->display.set_cdclk = bxt_set_cdclk;
dev_priv->display.bw_calc_min_cdclk = skl_bw_calc_min_cdclk;
dev_priv->display.modeset_calc_cdclk = bxt_modeset_calc_cdclk;
dev_priv->display.calc_voltage_level = ehl_calc_voltage_level;
dev_priv->cdclk_funcs = &ehl_cdclk_funcs;
dev_priv->cdclk.table = icl_cdclk_table;
} else if (DISPLAY_VER(dev_priv) >= 11) {
dev_priv->display.set_cdclk = bxt_set_cdclk;
dev_priv->display.bw_calc_min_cdclk = skl_bw_calc_min_cdclk;
dev_priv->display.modeset_calc_cdclk = bxt_modeset_calc_cdclk;
dev_priv->display.calc_voltage_level = icl_calc_voltage_level;
dev_priv->cdclk_funcs = &icl_cdclk_funcs;
dev_priv->cdclk.table = icl_cdclk_table;
} else if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv)) {
dev_priv->display.bw_calc_min_cdclk = skl_bw_calc_min_cdclk;
dev_priv->display.set_cdclk = bxt_set_cdclk;
dev_priv->display.modeset_calc_cdclk = bxt_modeset_calc_cdclk;
dev_priv->display.calc_voltage_level = bxt_calc_voltage_level;
dev_priv->cdclk_funcs = &bxt_cdclk_funcs;
if (IS_GEMINILAKE(dev_priv))
dev_priv->cdclk.table = glk_cdclk_table;
else
dev_priv->cdclk.table = bxt_cdclk_table;
} else if (DISPLAY_VER(dev_priv) == 9) {
dev_priv->display.bw_calc_min_cdclk = skl_bw_calc_min_cdclk;
dev_priv->display.set_cdclk = skl_set_cdclk;
dev_priv->display.modeset_calc_cdclk = skl_modeset_calc_cdclk;
dev_priv->cdclk_funcs = &skl_cdclk_funcs;
} else if (IS_BROADWELL(dev_priv)) {
dev_priv->display.bw_calc_min_cdclk = intel_bw_calc_min_cdclk;
dev_priv->display.set_cdclk = bdw_set_cdclk;
dev_priv->display.modeset_calc_cdclk = bdw_modeset_calc_cdclk;
dev_priv->cdclk_funcs = &bdw_cdclk_funcs;
} else if (IS_HASWELL(dev_priv)) {
dev_priv->cdclk_funcs = &hsw_cdclk_funcs;
} else if (IS_CHERRYVIEW(dev_priv)) {
dev_priv->display.bw_calc_min_cdclk = intel_bw_calc_min_cdclk;
dev_priv->display.set_cdclk = chv_set_cdclk;
dev_priv->display.modeset_calc_cdclk = vlv_modeset_calc_cdclk;
dev_priv->cdclk_funcs = &chv_cdclk_funcs;
} else if (IS_VALLEYVIEW(dev_priv)) {
dev_priv->display.bw_calc_min_cdclk = intel_bw_calc_min_cdclk;
dev_priv->display.set_cdclk = vlv_set_cdclk;
dev_priv->display.modeset_calc_cdclk = vlv_modeset_calc_cdclk;
} else {
dev_priv->display.bw_calc_min_cdclk = intel_bw_calc_min_cdclk;
dev_priv->display.modeset_calc_cdclk = fixed_modeset_calc_cdclk;
dev_priv->cdclk_funcs = &vlv_cdclk_funcs;
} else if (IS_SANDYBRIDGE(dev_priv) || IS_IVYBRIDGE(dev_priv)) {
dev_priv->cdclk_funcs = &fixed_400mhz_cdclk_funcs;
} else if (IS_IRONLAKE(dev_priv)) {
dev_priv->cdclk_funcs = &ilk_cdclk_funcs;
} else if (IS_GM45(dev_priv)) {
dev_priv->cdclk_funcs = &gm45_cdclk_funcs;
} else if (IS_G45(dev_priv)) {
dev_priv->cdclk_funcs = &g33_cdclk_funcs;
} else if (IS_I965GM(dev_priv)) {
dev_priv->cdclk_funcs = &i965gm_cdclk_funcs;
} else if (IS_I965G(dev_priv)) {
dev_priv->cdclk_funcs = &fixed_400mhz_cdclk_funcs;
} else if (IS_PINEVIEW(dev_priv)) {
dev_priv->cdclk_funcs = &pnv_cdclk_funcs;
} else if (IS_G33(dev_priv)) {
dev_priv->cdclk_funcs = &g33_cdclk_funcs;
} else if (IS_I945GM(dev_priv)) {
dev_priv->cdclk_funcs = &i945gm_cdclk_funcs;
} else if (IS_I945G(dev_priv)) {
dev_priv->cdclk_funcs = &fixed_400mhz_cdclk_funcs;
} else if (IS_I915GM(dev_priv)) {
dev_priv->cdclk_funcs = &i915gm_cdclk_funcs;
} else if (IS_I915G(dev_priv)) {
dev_priv->cdclk_funcs = &i915g_cdclk_funcs;
} else if (IS_I865G(dev_priv)) {
dev_priv->cdclk_funcs = &i865g_cdclk_funcs;
} else if (IS_I85X(dev_priv)) {
dev_priv->cdclk_funcs = &i85x_cdclk_funcs;
} else if (IS_I845G(dev_priv)) {
dev_priv->cdclk_funcs = &i845g_cdclk_funcs;
} else if (IS_I830(dev_priv)) {
dev_priv->cdclk_funcs = &i830_cdclk_funcs;
}

if (DISPLAY_VER(dev_priv) >= 10 || IS_BROXTON(dev_priv))
dev_priv->display.get_cdclk = bxt_get_cdclk;
else if (DISPLAY_VER(dev_priv) == 9)
dev_priv->display.get_cdclk = skl_get_cdclk;
else if (IS_BROADWELL(dev_priv))
dev_priv->display.get_cdclk = bdw_get_cdclk;
else if (IS_HASWELL(dev_priv))
dev_priv->display.get_cdclk = hsw_get_cdclk;
else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
dev_priv->display.get_cdclk = vlv_get_cdclk;
else if (IS_SANDYBRIDGE(dev_priv) || IS_IVYBRIDGE(dev_priv))
dev_priv->display.get_cdclk = fixed_400mhz_get_cdclk;
else if (IS_IRONLAKE(dev_priv))
dev_priv->display.get_cdclk = fixed_450mhz_get_cdclk;
else if (IS_GM45(dev_priv))
dev_priv->display.get_cdclk = gm45_get_cdclk;
else if (IS_G45(dev_priv))
dev_priv->display.get_cdclk = g33_get_cdclk;
else if (IS_I965GM(dev_priv))
dev_priv->display.get_cdclk = i965gm_get_cdclk;
else if (IS_I965G(dev_priv))
dev_priv->display.get_cdclk = fixed_400mhz_get_cdclk;
else if (IS_PINEVIEW(dev_priv))
dev_priv->display.get_cdclk = pnv_get_cdclk;
else if (IS_G33(dev_priv))
dev_priv->display.get_cdclk = g33_get_cdclk;
else if (IS_I945GM(dev_priv))
dev_priv->display.get_cdclk = i945gm_get_cdclk;
else if (IS_I945G(dev_priv))
dev_priv->display.get_cdclk = fixed_400mhz_get_cdclk;
else if (IS_I915GM(dev_priv))
dev_priv->display.get_cdclk = i915gm_get_cdclk;
else if (IS_I915G(dev_priv))
dev_priv->display.get_cdclk = fixed_333mhz_get_cdclk;
else if (IS_I865G(dev_priv))
dev_priv->display.get_cdclk = fixed_266mhz_get_cdclk;
else if (IS_I85X(dev_priv))
dev_priv->display.get_cdclk = i85x_get_cdclk;
else if (IS_I845G(dev_priv))
dev_priv->display.get_cdclk = fixed_200mhz_get_cdclk;
else if (IS_I830(dev_priv))
dev_priv->display.get_cdclk = fixed_133mhz_get_cdclk;

if (drm_WARN(&dev_priv->drm, !dev_priv->display.get_cdclk,
"Unknown platform. Assuming 133 MHz CDCLK\n"))
dev_priv->display.get_cdclk = fixed_133mhz_get_cdclk;
if (drm_WARN(&dev_priv->drm, !dev_priv->cdclk_funcs,
"Unknown platform. Assuming i830\n"))
dev_priv->cdclk_funcs = &i830_cdclk_funcs;
}
@@ -68,7 +68,9 @@ void intel_set_cdclk_post_plane_update(struct intel_atomic_state *state);
void intel_dump_cdclk_config(const struct intel_cdclk_config *cdclk_config,
const char *context);
int intel_modeset_calc_cdclk(struct intel_atomic_state *state);

void intel_cdclk_get_cdclk(struct drm_i915_private *dev_priv,
struct intel_cdclk_config *cdclk_config);
int intel_cdclk_bw_calc_min_cdclk(struct intel_atomic_state *state);
struct intel_cdclk_state *
intel_atomic_get_cdclk_state(struct intel_atomic_state *state);

@@ -25,6 +25,8 @@
#include "intel_color.h"
#include "intel_de.h"
#include "intel_display_types.h"
#include "intel_dpll.h"
#include "intel_dsi.h"

#define CTM_COEFF_SIGN (1ULL << 63)

@@ -1137,14 +1139,14 @@ void intel_color_load_luts(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);

dev_priv->display.load_luts(crtc_state);
dev_priv->color_funcs->load_luts(crtc_state);
}

void intel_color_commit(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);

dev_priv->display.color_commit(crtc_state);
dev_priv->color_funcs->color_commit(crtc_state);
}

static bool intel_can_preload_luts(const struct intel_crtc_state *new_crtc_state)
@@ -1200,15 +1202,15 @@ int intel_color_check(struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);

return dev_priv->display.color_check(crtc_state);
return dev_priv->color_funcs->color_check(crtc_state);
}

void intel_color_get_config(struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);

if (dev_priv->display.read_luts)
dev_priv->display.read_luts(crtc_state);
if (dev_priv->color_funcs->read_luts)
dev_priv->color_funcs->read_luts(crtc_state);
}

static bool need_plane_update(struct intel_plane *plane,
@@ -2092,6 +2094,76 @@ static void icl_read_luts(struct intel_crtc_state *crtc_state)
}
}

static const struct intel_color_funcs chv_color_funcs = {
.color_check = chv_color_check,
.color_commit = i9xx_color_commit,
.load_luts = chv_load_luts,
.read_luts = chv_read_luts,
};

static const struct intel_color_funcs i965_color_funcs = {
.color_check = i9xx_color_check,
.color_commit = i9xx_color_commit,
.load_luts = i965_load_luts,
.read_luts = i965_read_luts,
};

static const struct intel_color_funcs i9xx_color_funcs = {
.color_check = i9xx_color_check,
.color_commit = i9xx_color_commit,
.load_luts = i9xx_load_luts,
.read_luts = i9xx_read_luts,
};

static const struct intel_color_funcs icl_color_funcs = {
.color_check = icl_color_check,
.color_commit = skl_color_commit,
.load_luts = icl_load_luts,
.read_luts = icl_read_luts,
};

static const struct intel_color_funcs glk_color_funcs = {
.color_check = glk_color_check,
.color_commit = skl_color_commit,
.load_luts = glk_load_luts,
.read_luts = glk_read_luts,
};

static const struct intel_color_funcs skl_color_funcs = {
.color_check = ivb_color_check,
.color_commit = skl_color_commit,
.load_luts = bdw_load_luts,
.read_luts = NULL,
};

static const struct intel_color_funcs bdw_color_funcs = {
.color_check = ivb_color_check,
.color_commit = hsw_color_commit,
.load_luts = bdw_load_luts,
.read_luts = NULL,
};

static const struct intel_color_funcs hsw_color_funcs = {
.color_check = ivb_color_check,
.color_commit = hsw_color_commit,
.load_luts = ivb_load_luts,
.read_luts = NULL,
};

static const struct intel_color_funcs ivb_color_funcs = {
.color_check = ivb_color_check,
.color_commit = ilk_color_commit,
.load_luts = ivb_load_luts,
.read_luts = NULL,
};

static const struct intel_color_funcs ilk_color_funcs = {
.color_check = ilk_color_check,
.color_commit = ilk_color_commit,
.load_luts = ilk_load_luts,
.read_luts = ilk_read_luts,
};

void intel_color_init(struct intel_crtc *crtc)
{
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
@@ -2101,52 +2173,28 @@ void intel_color_init(struct intel_crtc *crtc)

if (HAS_GMCH(dev_priv)) {
if (IS_CHERRYVIEW(dev_priv)) {
dev_priv->display.color_check = chv_color_check;
dev_priv->display.color_commit = i9xx_color_commit;
dev_priv->display.load_luts = chv_load_luts;
dev_priv->display.read_luts = chv_read_luts;
dev_priv->color_funcs = &chv_color_funcs;
} else if (DISPLAY_VER(dev_priv) >= 4) {
dev_priv->display.color_check = i9xx_color_check;
dev_priv->display.color_commit = i9xx_color_commit;
dev_priv->display.load_luts = i965_load_luts;
dev_priv->display.read_luts = i965_read_luts;
dev_priv->color_funcs = &i965_color_funcs;
} else {
dev_priv->display.color_check = i9xx_color_check;
dev_priv->display.color_commit = i9xx_color_commit;
dev_priv->display.load_luts = i9xx_load_luts;
dev_priv->display.read_luts = i9xx_read_luts;
dev_priv->color_funcs = &i9xx_color_funcs;
}
} else {
if (DISPLAY_VER(dev_priv) >= 11)
dev_priv->display.color_check = icl_color_check;
else if (DISPLAY_VER(dev_priv) >= 10)
dev_priv->display.color_check = glk_color_check;
else if (DISPLAY_VER(dev_priv) >= 7)
dev_priv->display.color_check = ivb_color_check;
else
dev_priv->display.color_check = ilk_color_check;

if (DISPLAY_VER(dev_priv) >= 9)
dev_priv->display.color_commit = skl_color_commit;
else if (IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv))
dev_priv->display.color_commit = hsw_color_commit;
else
dev_priv->display.color_commit = ilk_color_commit;

if (DISPLAY_VER(dev_priv) >= 11) {
dev_priv->display.load_luts = icl_load_luts;
dev_priv->display.read_luts = icl_read_luts;
} else if (DISPLAY_VER(dev_priv) == 10) {
dev_priv->display.load_luts = glk_load_luts;
dev_priv->display.read_luts = glk_read_luts;
} else if (DISPLAY_VER(dev_priv) >= 8) {
dev_priv->display.load_luts = bdw_load_luts;
} else if (DISPLAY_VER(dev_priv) >= 7) {
dev_priv->display.load_luts = ivb_load_luts;
} else {
dev_priv->display.load_luts = ilk_load_luts;
dev_priv->display.read_luts = ilk_read_luts;
}
dev_priv->color_funcs = &icl_color_funcs;
else if (DISPLAY_VER(dev_priv) == 10)
dev_priv->color_funcs = &glk_color_funcs;
else if (DISPLAY_VER(dev_priv) == 9)
dev_priv->color_funcs = &skl_color_funcs;
else if (DISPLAY_VER(dev_priv) == 8)
dev_priv->color_funcs = &bdw_color_funcs;
else if (DISPLAY_VER(dev_priv) == 7) {
if (IS_HASWELL(dev_priv))
dev_priv->color_funcs = &hsw_color_funcs;
else
dev_priv->color_funcs = &ivb_color_funcs;
} else
dev_priv->color_funcs = &ilk_color_funcs;
}

drm_crtc_enable_color_mgmt(&crtc->base,
@@ -29,13 +29,13 @@
#include <drm/drm_atomic_helper.h>
#include <drm/drm_edid.h>

#include "display/intel_panel.h"

#include "i915_drv.h"
#include "intel_backlight.h"
#include "intel_connector.h"
#include "intel_display_debugfs.h"
#include "intel_display_types.h"
#include "intel_hdcp.h"
#include "intel_panel.h"

int intel_connector_init(struct intel_connector *connector)
{
@@ -124,7 +124,7 @@ int intel_connector_register(struct drm_connector *connector)
goto err_backlight;
}

intel_connector_debugfs_add(connector);
intel_connector_debugfs_add(intel_connector);

return 0;

@@ -251,7 +251,7 @@ static void hsw_post_disable_crt(struct intel_atomic_state *state,

intel_crtc_vblank_off(old_crtc_state);

intel_disable_pipe(old_crtc_state);
intel_disable_transcoder(old_crtc_state);

intel_ddi_disable_transcoder_func(old_crtc_state);

@@ -314,7 +314,7 @@ static void hsw_enable_crt(struct intel_atomic_state *state,

intel_ddi_enable_transcoder_func(encoder, crtc_state);

intel_enable_pipe(crtc_state);
intel_enable_transcoder(crtc_state);

lpt_pch_enable(crtc_state);

@@ -536,8 +536,10 @@ static void i9xx_update_cursor(struct intel_plane *plane,
if (DISPLAY_VER(dev_priv) >= 9)
skl_write_cursor_wm(plane, crtc_state);

if (!intel_crtc_needs_modeset(crtc_state))
if (plane_state)
intel_psr2_program_plane_sel_fetch(plane, crtc_state, plane_state, 0);
else
intel_psr2_disable_plane_sel_fetch(plane, crtc_state);

if (plane->cursor.base != base ||
plane->cursor.size != fbc_ctl ||
@@ -637,8 +639,7 @@ intel_legacy_cursor_update(struct drm_plane *_plane,
* FIXME bigjoiner fastpath would be good
*/
if (!crtc_state->hw.active || intel_crtc_needs_modeset(crtc_state) ||
crtc_state->update_pipe || crtc_state->bigjoiner ||
crtc_state->enable_psr2_sel_fetch)
crtc_state->update_pipe || crtc_state->bigjoiner)
goto slow;

/*
@@ -696,7 +697,7 @@ intel_legacy_cursor_update(struct drm_plane *_plane,
goto out_free;

intel_frontbuffer_flush(to_intel_frontbuffer(new_plane_state->hw.fb),
ORIGIN_FLIP);
ORIGIN_CURSOR_UPDATE);
intel_frontbuffer_track(to_intel_frontbuffer(old_plane_state->hw.fb),
to_intel_frontbuffer(new_plane_state->hw.fb),
plane->frontbuffer_bit);

@@ -29,6 +29,7 @@

#include "i915_drv.h"
#include "intel_audio.h"
#include "intel_backlight.h"
#include "intel_combo_phy.h"
#include "intel_connector.h"
#include "intel_crtc.h"
@@ -40,6 +41,7 @@
#include "intel_dp_link_training.h"
#include "intel_dp_mst.h"
#include "intel_dpio_phy.h"
#include "intel_drrs.h"
#include "intel_dsi.h"
#include "intel_fdi.h"
#include "intel_fifo_underrun.h"
@@ -48,7 +50,6 @@
#include "intel_hdmi.h"
#include "intel_hotplug.h"
#include "intel_lspcon.h"
#include "intel_panel.h"
#include "intel_pps.h"
#include "intel_psr.h"
#include "intel_snps_phy.h"
@@ -73,24 +74,27 @@ static const u8 index_to_dp_signal_levels[] = {
};

static int intel_ddi_hdmi_level(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
const struct intel_ddi_buf_trans *trans)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
int n_entries, level, default_entry;
int level;

n_entries = intel_ddi_hdmi_num_entries(encoder, crtc_state, &default_entry);
if (n_entries == 0)
return 0;
level = intel_bios_hdmi_level_shift(encoder);
if (level < 0)
level = default_entry;

if (drm_WARN_ON_ONCE(&dev_priv->drm, level >= n_entries))
level = n_entries - 1;
level = trans->hdmi_default_entry;

return level;
}

static bool has_buf_trans_select(struct drm_i915_private *i915)
{
return DISPLAY_VER(i915) < 10 && !IS_BROXTON(i915);
}

static bool has_iboost(struct drm_i915_private *i915)
{
return DISPLAY_VER(i915) == 9 && !IS_BROXTON(i915);
}

/*
* Starting with Haswell, DDI port buffers must be programmed with correct
* values in advance. This function programs the correct values for
@@ -103,22 +107,22 @@ void hsw_prepare_dp_ddi_buffers(struct intel_encoder *encoder,
u32 iboost_bit = 0;
int i, n_entries;
enum port port = encoder->port;
const struct intel_ddi_buf_trans *ddi_translations;
const struct intel_ddi_buf_trans *trans;

ddi_translations = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
if (drm_WARN_ON_ONCE(&dev_priv->drm, !ddi_translations))
trans = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
if (drm_WARN_ON_ONCE(&dev_priv->drm, !trans))
return;

/* If we're boosting the current, set bit 31 of trans1 */
if (DISPLAY_VER(dev_priv) == 9 && !IS_BROXTON(dev_priv) &&
if (has_iboost(dev_priv) &&
intel_bios_encoder_dp_boost_level(encoder->devdata))
iboost_bit = DDI_BUF_BALANCE_LEG_ENABLE;

for (i = 0; i < n_entries; i++) {
intel_de_write(dev_priv, DDI_BUF_TRANS_LO(port, i),
ddi_translations->entries[i].hsw.trans1 | iboost_bit);
trans->entries[i].hsw.trans1 | iboost_bit);
intel_de_write(dev_priv, DDI_BUF_TRANS_HI(port, i),
ddi_translations->entries[i].hsw.trans2);
trans->entries[i].hsw.trans2);
}
}

@@ -128,31 +132,29 @@ void hsw_prepare_dp_ddi_buffers(struct intel_encoder *encoder,
* HDMI/DVI use cases.
*/
static void hsw_prepare_hdmi_ddi_buffers(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
int level)
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
int level = intel_ddi_level(encoder, crtc_state, 0);
u32 iboost_bit = 0;
int n_entries;
enum port port = encoder->port;
const struct intel_ddi_buf_trans *ddi_translations;
const struct intel_ddi_buf_trans *trans;

ddi_translations = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
if (drm_WARN_ON_ONCE(&dev_priv->drm, !ddi_translations))
trans = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
if (drm_WARN_ON_ONCE(&dev_priv->drm, !trans))
return;
if (drm_WARN_ON_ONCE(&dev_priv->drm, level >= n_entries))
level = n_entries - 1;

/* If we're boosting the current, set bit 31 of trans1 */
if (DISPLAY_VER(dev_priv) == 9 && !IS_BROXTON(dev_priv) &&
if (has_iboost(dev_priv) &&
intel_bios_encoder_hdmi_boost_level(encoder->devdata))
iboost_bit = DDI_BUF_BALANCE_LEG_ENABLE;

/* Entry 9 is for HDMI: */
intel_de_write(dev_priv, DDI_BUF_TRANS_LO(port, 9),
ddi_translations->entries[level].hsw.trans1 | iboost_bit);
trans->entries[level].hsw.trans1 | iboost_bit);
intel_de_write(dev_priv, DDI_BUF_TRANS_HI(port, 9),
ddi_translations->entries[level].hsw.trans2);
trans->entries[level].hsw.trans2);
}

void intel_wait_ddi_buf_idle(struct drm_i915_private *dev_priv,
@@ -281,13 +283,14 @@ static void intel_ddi_init_dp_buf_reg(struct intel_encoder *encoder,
struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
enum phy phy = intel_port_to_phy(i915, encoder->port);

/* DDI_BUF_CTL_ENABLE will be set by intel_ddi_prepare_link_retrain() later */
intel_dp->DP = dig_port->saved_port_bits |
DDI_BUF_CTL_ENABLE | DDI_BUF_TRANS_SELECT(0);
intel_dp->DP |= DDI_PORT_WIDTH(crtc_state->lane_count);
DDI_PORT_WIDTH(crtc_state->lane_count) |
DDI_BUF_TRANS_SELECT(0);

if (IS_ALDERLAKE_P(i915) && intel_phy_is_tc(i915, phy)) {
intel_dp->DP |= ddi_buf_phy_link_rate(crtc_state->port_clock);
if (dig_port->tc_mode != TC_PORT_TBT_ALT)
if (!intel_tc_port_in_tbt_alt_mode(dig_port))
intel_dp->DP |= DDI_BUF_CTL_TC_PHY_OWNERSHIP;
}
}
@@ -407,6 +410,20 @@ static u32 bdw_trans_port_sync_master_select(enum transcoder master_transcoder)
return master_transcoder + 1;
}

static void
intel_ddi_config_transcoder_dp2(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
u32 val = 0;

if (intel_dp_is_uhbr(crtc_state))
val = TRANS_DP2_128B132B_CHANNEL_CODING;

intel_de_write(i915, TRANS_DP2_CTL(cpu_transcoder), val);
}

/*
* Returns the TRANS_DDI_FUNC_CTL value based on CRTC state.
*
@@ -488,10 +505,13 @@ intel_ddi_transcoder_func_reg_val_get(struct intel_encoder *encoder,
if (crtc_state->hdmi_high_tmds_clock_ratio)
temp |= TRANS_DDI_HIGH_TMDS_CHAR_RATE;
} else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_ANALOG)) {
temp |= TRANS_DDI_MODE_SELECT_FDI;
temp |= TRANS_DDI_MODE_SELECT_FDI_OR_128B132B;
temp |= (crtc_state->fdi_lanes - 1) << 1;
} else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST)) {
temp |= TRANS_DDI_MODE_SELECT_DP_MST;
if (intel_dp_is_uhbr(crtc_state))
temp |= TRANS_DDI_MODE_SELECT_FDI_OR_128B132B;
else
temp |= TRANS_DDI_MODE_SELECT_DP_MST;
temp |= DDI_PORT_WIDTH(crtc_state->lane_count);

if (DISPLAY_VER(dev_priv) >= 12) {
@@ -678,8 +698,13 @@ bool intel_ddi_connector_get_hw_state(struct intel_connector *intel_connector)
ret = false;
break;

case TRANS_DDI_MODE_SELECT_FDI:
ret = type == DRM_MODE_CONNECTOR_VGA;
case TRANS_DDI_MODE_SELECT_FDI_OR_128B132B:
if (HAS_DP20(dev_priv))
/* 128b/132b */
ret = false;
else
/* FDI */
ret = type == DRM_MODE_CONNECTOR_VGA;
break;

default:
@@ -766,8 +791,9 @@ static void intel_ddi_get_encoder_pipes(struct intel_encoder *encoder,
if ((tmp & port_mask) != ddi_select)
continue;

if ((tmp & TRANS_DDI_MODE_SELECT_MASK) ==
TRANS_DDI_MODE_SELECT_DP_MST)
if ((tmp & TRANS_DDI_MODE_SELECT_MASK) == TRANS_DDI_MODE_SELECT_DP_MST ||
(HAS_DP20(dev_priv) &&
(tmp & TRANS_DDI_MODE_SELECT_MASK) == TRANS_DDI_MODE_SELECT_FDI_OR_128B132B))
mst_pipe_mask |= BIT(p);

*pipe_mask |= BIT(p);
@@ -861,8 +887,7 @@ static void intel_ddi_get_power_domains(struct intel_encoder *encoder,

dig_port = enc_to_dig_port(encoder);

if (!intel_phy_is_tc(dev_priv, phy) ||
dig_port->tc_mode != TC_PORT_TBT_ALT) {
if (!intel_tc_port_in_tbt_alt_mode(dig_port)) {
drm_WARN_ON(&dev_priv->drm, dig_port->ddi_io_wakeref);
dig_port->ddi_io_wakeref = intel_display_power_get(dev_priv,
dig_port->ddi_io_power_domain);
@@ -947,16 +972,14 @@ static void skl_ddi_set_iboost(struct intel_encoder *encoder,
iboost = intel_bios_encoder_dp_boost_level(encoder->devdata);

if (iboost == 0) {
const struct intel_ddi_buf_trans *ddi_translations;
const struct intel_ddi_buf_trans *trans;
int n_entries;

ddi_translations = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
if (drm_WARN_ON_ONCE(&dev_priv->drm, !ddi_translations))
trans = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
if (drm_WARN_ON_ONCE(&dev_priv->drm, !trans))
return;
if (drm_WARN_ON_ONCE(&dev_priv->drm, level >= n_entries))
level = n_entries - 1;

iboost = ddi_translations->entries[level].hsw.i_boost;
iboost = trans->entries[level].hsw.i_boost;
}

/* Make sure that the requested I_boost is valid */
@@ -971,28 +994,6 @@ static void skl_ddi_set_iboost(struct intel_encoder *encoder,
_skl_ddi_set_iboost(dev_priv, PORT_E, iboost);
}

static void bxt_ddi_vswing_sequence(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
int level)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
const struct intel_ddi_buf_trans *ddi_translations;
enum port port = encoder->port;
int n_entries;

ddi_translations = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
if (drm_WARN_ON_ONCE(&dev_priv->drm, !ddi_translations))
return;
if (drm_WARN_ON_ONCE(&dev_priv->drm, level >= n_entries))
level = n_entries - 1;

bxt_ddi_phy_set_signal_level(dev_priv, port,
ddi_translations->entries[level].bxt.margin,
ddi_translations->entries[level].bxt.scale,
ddi_translations->entries[level].bxt.enable,
ddi_translations->entries[level].bxt.deemphasis);
}

static u8 intel_ddi_dp_voltage_max(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
@@ -1023,26 +1024,24 @@ static u8 intel_ddi_dp_preemph_max(struct intel_dp *intel_dp)
}

static void icl_ddi_combo_vswing_program(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
int level)
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
const struct intel_ddi_buf_trans *ddi_translations;
int level = intel_ddi_level(encoder, crtc_state, 0);
const struct intel_ddi_buf_trans *trans;
enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
int n_entries, ln;
u32 val;

ddi_translations = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
if (drm_WARN_ON_ONCE(&dev_priv->drm, !ddi_translations))
trans = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
if (drm_WARN_ON_ONCE(&dev_priv->drm, !trans))
return;
if (drm_WARN_ON_ONCE(&dev_priv->drm, level >= n_entries))
level = n_entries - 1;

if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_EDP)) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);

val = EDP4K2K_MODE_OVRD_EN | EDP4K2K_MODE_OVRD_OPTIMIZED;
intel_dp->hobl_active = is_hobl_buf_trans(ddi_translations);
intel_dp->hobl_active = is_hobl_buf_trans(trans);
intel_de_rmw(dev_priv, ICL_PORT_CL_DW10(phy), val,
intel_dp->hobl_active ? val : 0);
}
@@ -1060,8 +1059,8 @@ static void icl_ddi_combo_vswing_program(struct intel_encoder *encoder,
val = intel_de_read(dev_priv, ICL_PORT_TX_DW2_LN0(phy));
val &= ~(SWING_SEL_LOWER_MASK | SWING_SEL_UPPER_MASK |
RCOMP_SCALAR_MASK);
val |= SWING_SEL_UPPER(ddi_translations->entries[level].icl.dw2_swing_sel);
val |= SWING_SEL_LOWER(ddi_translations->entries[level].icl.dw2_swing_sel);
val |= SWING_SEL_UPPER(trans->entries[level].icl.dw2_swing_sel);
val |= SWING_SEL_LOWER(trans->entries[level].icl.dw2_swing_sel);
/* Program Rcomp scalar for every table entry */
val |= RCOMP_SCALAR(0x98);
intel_de_write(dev_priv, ICL_PORT_TX_DW2_GRP(phy), val);
@@ -1072,22 +1071,21 @@ static void icl_ddi_combo_vswing_program(struct intel_encoder *encoder,
val = intel_de_read(dev_priv, ICL_PORT_TX_DW4_LN(ln, phy));
val &= ~(POST_CURSOR_1_MASK | POST_CURSOR_2_MASK |
CURSOR_COEFF_MASK);
val |= POST_CURSOR_1(ddi_translations->entries[level].icl.dw4_post_cursor_1);
val |= POST_CURSOR_2(ddi_translations->entries[level].icl.dw4_post_cursor_2);
val |= CURSOR_COEFF(ddi_translations->entries[level].icl.dw4_cursor_coeff);
val |= POST_CURSOR_1(trans->entries[level].icl.dw4_post_cursor_1);
val |= POST_CURSOR_2(trans->entries[level].icl.dw4_post_cursor_2);
val |= CURSOR_COEFF(trans->entries[level].icl.dw4_cursor_coeff);
intel_de_write(dev_priv, ICL_PORT_TX_DW4_LN(ln, phy), val);
}

/* Program PORT_TX_DW7 */
val = intel_de_read(dev_priv, ICL_PORT_TX_DW7_LN0(phy));
val &= ~N_SCALAR_MASK;
val |= N_SCALAR(ddi_translations->entries[level].icl.dw7_n_scalar);
val |= N_SCALAR(trans->entries[level].icl.dw7_n_scalar);
intel_de_write(dev_priv, ICL_PORT_TX_DW7_GRP(phy), val);
}

static void icl_combo_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
int level)
static void icl_combo_phy_set_signal_levels(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
@@ -1138,7 +1136,7 @@ static void icl_combo_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
intel_de_write(dev_priv, ICL_PORT_TX_DW5_GRP(phy), val);

/* 5. Program swing and de-emphasis */
icl_ddi_combo_vswing_program(encoder, crtc_state, level);
icl_ddi_combo_vswing_program(encoder, crtc_state);

/* 6. Set training enable to trigger update */
val = intel_de_read(dev_priv, ICL_PORT_TX_DW5_LN0(phy));
@@ -1146,24 +1144,22 @@ static void icl_combo_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
intel_de_write(dev_priv, ICL_PORT_TX_DW5_GRP(phy), val);
}

static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
int level)
static void icl_mg_phy_set_signal_levels(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
enum tc_port tc_port = intel_port_to_tc(dev_priv, encoder->port);
const struct intel_ddi_buf_trans *ddi_translations;
int level = intel_ddi_level(encoder, crtc_state, 0);
const struct intel_ddi_buf_trans *trans;
int n_entries, ln;
u32 val;

if (enc_to_dig_port(encoder)->tc_mode == TC_PORT_TBT_ALT)
if (intel_tc_port_in_tbt_alt_mode(enc_to_dig_port(encoder)))
return;

ddi_translations = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
if (drm_WARN_ON_ONCE(&dev_priv->drm, !ddi_translations))
trans = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
if (drm_WARN_ON_ONCE(&dev_priv->drm, !trans))
return;
if (drm_WARN_ON_ONCE(&dev_priv->drm, level >= n_entries))
level = n_entries - 1;

/* Set MG_TX_LINK_PARAMS cri_use_fs32 to 0. */
for (ln = 0; ln < 2; ln++) {
@@ -1181,13 +1177,13 @@ static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
val = intel_de_read(dev_priv, MG_TX1_SWINGCTRL(ln, tc_port));
val &= ~CRI_TXDEEMPH_OVERRIDE_17_12_MASK;
val |= CRI_TXDEEMPH_OVERRIDE_17_12(
ddi_translations->entries[level].mg.cri_txdeemph_override_17_12);
trans->entries[level].mg.cri_txdeemph_override_17_12);
intel_de_write(dev_priv, MG_TX1_SWINGCTRL(ln, tc_port), val);

val = intel_de_read(dev_priv, MG_TX2_SWINGCTRL(ln, tc_port));
val &= ~CRI_TXDEEMPH_OVERRIDE_17_12_MASK;
val |= CRI_TXDEEMPH_OVERRIDE_17_12(
ddi_translations->entries[level].mg.cri_txdeemph_override_17_12);
trans->entries[level].mg.cri_txdeemph_override_17_12);
intel_de_write(dev_priv, MG_TX2_SWINGCTRL(ln, tc_port), val);
}

@@ -1197,9 +1193,9 @@ static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
val &= ~(CRI_TXDEEMPH_OVERRIDE_11_6_MASK |
CRI_TXDEEMPH_OVERRIDE_5_0_MASK);
val |= CRI_TXDEEMPH_OVERRIDE_5_0(
trans->entries[level].mg.cri_txdeemph_override_5_0) |
CRI_TXDEEMPH_OVERRIDE_11_6(
ddi_translations->entries[level].mg.cri_txdeemph_override_11_6) |
|
||||
trans->entries[level].mg.cri_txdeemph_override_11_6) |
|
||||
CRI_TXDEEMPH_OVERRIDE_EN;
|
||||
intel_de_write(dev_priv, MG_TX1_DRVCTRL(ln, tc_port), val);
|
||||
|
||||
@ -1207,9 +1203,9 @@ static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
|
||||
val &= ~(CRI_TXDEEMPH_OVERRIDE_11_6_MASK |
|
||||
CRI_TXDEEMPH_OVERRIDE_5_0_MASK);
|
||||
val |= CRI_TXDEEMPH_OVERRIDE_5_0(
|
||||
ddi_translations->entries[level].mg.cri_txdeemph_override_5_0) |
|
||||
trans->entries[level].mg.cri_txdeemph_override_5_0) |
|
||||
CRI_TXDEEMPH_OVERRIDE_11_6(
|
||||
ddi_translations->entries[level].mg.cri_txdeemph_override_11_6) |
|
||||
trans->entries[level].mg.cri_txdeemph_override_11_6) |
|
||||
CRI_TXDEEMPH_OVERRIDE_EN;
|
||||
intel_de_write(dev_priv, MG_TX2_DRVCTRL(ln, tc_port), val);
|
||||
|
||||
@ -1269,45 +1265,29 @@ static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
|
||||
}
|
||||
}
|
||||
|
||||
static void icl_ddi_vswing_sequence(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
int level)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
|
||||
|
||||
if (intel_phy_is_combo(dev_priv, phy))
|
||||
icl_combo_phy_ddi_vswing_sequence(encoder, crtc_state, level);
|
||||
else
|
||||
icl_mg_phy_ddi_vswing_sequence(encoder, crtc_state, level);
|
||||
}
|
||||
|
||||
static void
|
||||
tgl_dkl_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
int level)
|
||||
static void tgl_dkl_phy_set_signal_levels(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
enum tc_port tc_port = intel_port_to_tc(dev_priv, encoder->port);
|
||||
const struct intel_ddi_buf_trans *ddi_translations;
|
||||
int level = intel_ddi_level(encoder, crtc_state, 0);
|
||||
const struct intel_ddi_buf_trans *trans;
|
||||
u32 val, dpcnt_mask, dpcnt_val;
|
||||
int n_entries, ln;
|
||||
|
||||
if (enc_to_dig_port(encoder)->tc_mode == TC_PORT_TBT_ALT)
|
||||
if (intel_tc_port_in_tbt_alt_mode(enc_to_dig_port(encoder)))
|
||||
return;
|
||||
|
||||
ddi_translations = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
|
||||
if (drm_WARN_ON_ONCE(&dev_priv->drm, !ddi_translations))
|
||||
trans = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
|
||||
if (drm_WARN_ON_ONCE(&dev_priv->drm, !trans))
|
||||
return;
|
||||
if (drm_WARN_ON_ONCE(&dev_priv->drm, level >= n_entries))
|
||||
level = n_entries - 1;
|
||||
|
||||
dpcnt_mask = (DKL_TX_PRESHOOT_COEFF_MASK |
|
||||
DKL_TX_DE_EMPAHSIS_COEFF_MASK |
|
||||
DKL_TX_VSWING_CONTROL_MASK);
|
||||
dpcnt_val = DKL_TX_VSWING_CONTROL(ddi_translations->entries[level].dkl.dkl_vswing_control);
|
||||
dpcnt_val |= DKL_TX_DE_EMPHASIS_COEFF(ddi_translations->entries[level].dkl.dkl_de_emphasis_control);
|
||||
dpcnt_val |= DKL_TX_PRESHOOT_COEFF(ddi_translations->entries[level].dkl.dkl_preshoot_control);
|
||||
dpcnt_val = DKL_TX_VSWING_CONTROL(trans->entries[level].dkl.dkl_vswing_control);
|
||||
dpcnt_val |= DKL_TX_DE_EMPHASIS_COEFF(trans->entries[level].dkl.dkl_de_emphasis_control);
|
||||
dpcnt_val |= DKL_TX_PRESHOOT_COEFF(trans->entries[level].dkl.dkl_preshoot_control);
|
||||
|
||||
for (ln = 0; ln < 2; ln++) {
|
||||
intel_de_write(dev_priv, HIP_INDEX_REG(tc_port),
|
||||
@ -1340,19 +1320,6 @@ tgl_dkl_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
|
||||
}
|
||||
}
|
||||
|
||||
static void tgl_ddi_vswing_sequence(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
int level)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
|
||||
|
||||
if (intel_phy_is_combo(dev_priv, phy))
|
||||
icl_combo_phy_ddi_vswing_sequence(encoder, crtc_state, level);
|
||||
else
|
||||
tgl_dkl_phy_ddi_vswing_sequence(encoder, crtc_state, level);
|
||||
}
|
||||
|
||||
static int translate_signal_level(struct intel_dp *intel_dp,
|
||||
u8 signal_levels)
|
||||
{
|
||||
@ -1371,65 +1338,55 @@ static int translate_signal_level(struct intel_dp *intel_dp,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int intel_ddi_dp_level(struct intel_dp *intel_dp)
|
||||
static int intel_ddi_dp_level(struct intel_dp *intel_dp, int lane)
|
||||
{
|
||||
u8 train_set = intel_dp->train_set[0];
|
||||
u8 train_set = intel_dp->train_set[lane];
|
||||
u8 signal_levels = train_set & (DP_TRAIN_VOLTAGE_SWING_MASK |
|
||||
DP_TRAIN_PRE_EMPHASIS_MASK);
|
||||
|
||||
return translate_signal_level(intel_dp, signal_levels);
|
||||
}
|
||||
|
||||
static void
|
||||
dg2_set_signal_levels(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
int intel_ddi_level(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
int lane)
|
||||
{
|
||||
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
|
||||
int level = intel_ddi_dp_level(intel_dp);
|
||||
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
|
||||
const struct intel_ddi_buf_trans *trans;
|
||||
int level, n_entries;
|
||||
|
||||
intel_snps_phy_ddi_vswing_sequence(encoder, level);
|
||||
trans = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
|
||||
if (drm_WARN_ON_ONCE(&i915->drm, !trans))
|
||||
return 0;
|
||||
|
||||
if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI))
|
||||
level = intel_ddi_hdmi_level(encoder, trans);
|
||||
else
|
||||
level = intel_ddi_dp_level(enc_to_intel_dp(encoder), lane);
|
||||
|
||||
if (drm_WARN_ON_ONCE(&i915->drm, level >= n_entries))
|
||||
level = n_entries - 1;
|
||||
|
||||
return level;
|
||||
}
|
||||
|
||||
static void
|
||||
tgl_set_signal_levels(struct intel_dp *intel_dp,
|
||||
hsw_set_signal_levels(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
|
||||
int level = intel_ddi_dp_level(intel_dp);
|
||||
|
||||
tgl_ddi_vswing_sequence(encoder, crtc_state, level);
|
||||
}
|
||||
|
||||
static void
|
||||
icl_set_signal_levels(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
|
||||
int level = intel_ddi_dp_level(intel_dp);
|
||||
|
||||
icl_ddi_vswing_sequence(encoder, crtc_state, level);
|
||||
}
|
||||
|
||||
static void
|
||||
bxt_set_signal_levels(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
|
||||
int level = intel_ddi_dp_level(intel_dp);
|
||||
|
||||
bxt_ddi_vswing_sequence(encoder, crtc_state, level);
|
||||
}
|
||||
|
||||
static void
|
||||
hsw_set_signal_levels(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
int level = intel_ddi_dp_level(intel_dp);
|
||||
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
|
||||
int level = intel_ddi_level(encoder, crtc_state, 0);
|
||||
enum port port = encoder->port;
|
||||
u32 signal_levels;
|
||||
|
||||
if (has_iboost(dev_priv))
|
||||
skl_ddi_set_iboost(encoder, crtc_state, level);
|
||||
|
||||
/* HDMI ignores the rest */
|
||||
if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI))
|
||||
return;
|
||||
|
||||
signal_levels = DDI_BUF_TRANS_SELECT(level);
|
||||
|
||||
drm_dbg_kms(&dev_priv->drm, "Using signal levels %08x\n",
|
||||
@ -1438,9 +1395,6 @@ hsw_set_signal_levels(struct intel_dp *intel_dp,
|
||||
intel_dp->DP &= ~DDI_BUF_EMP_MASK;
|
||||
intel_dp->DP |= signal_levels;
|
||||
|
||||
if (DISPLAY_VER(dev_priv) == 9 && !IS_BROXTON(dev_priv))
|
||||
skl_ddi_set_iboost(encoder, crtc_state, level);
|
||||
|
||||
intel_de_write(dev_priv, DDI_BUF_CTL(port), intel_dp->DP);
|
||||
intel_de_posting_read(dev_priv, DDI_BUF_CTL(port));
|
||||
}
|
||||
@ -2059,7 +2013,7 @@ icl_program_mg_dp_mode(struct intel_digital_port *dig_port,
|
||||
u8 width;
|
||||
|
||||
if (!intel_phy_is_tc(dev_priv, phy) ||
|
||||
dig_port->tc_mode == TC_PORT_TBT_ALT)
|
||||
intel_tc_port_in_tbt_alt_mode(dig_port))
|
||||
return;
|
||||
|
||||
if (DISPLAY_VER(dev_priv) >= 12) {
|
||||
@ -2084,7 +2038,7 @@ icl_program_mg_dp_mode(struct intel_digital_port *dig_port,
|
||||
switch (pin_assignment) {
|
||||
case 0x0:
|
||||
drm_WARN_ON(&dev_priv->drm,
|
||||
dig_port->tc_mode != TC_PORT_LEGACY);
|
||||
!intel_tc_port_in_legacy_mode(dig_port));
|
||||
if (width == 1) {
|
||||
ln1 |= MG_DP_MODE_CFG_DP_X1_MODE;
|
||||
} else {
|
||||
@ -2329,14 +2283,18 @@ static void dg2_ddi_pre_enable_dp(struct intel_atomic_state *state,
|
||||
{
|
||||
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
|
||||
struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
|
||||
bool is_mst = intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST);
|
||||
int level = intel_ddi_dp_level(intel_dp);
|
||||
|
||||
intel_dp_set_link_params(intel_dp, crtc_state->port_clock,
|
||||
crtc_state->lane_count);
|
||||
|
||||
/*
|
||||
* We only configure what the register value will be here. Actual
|
||||
* enabling happens during link training farther down.
|
||||
*/
|
||||
intel_ddi_init_dp_buf_reg(encoder, crtc_state);
|
||||
|
||||
/*
|
||||
* 1. Enable Power Wells
|
||||
*
|
||||
@ -2353,8 +2311,7 @@ static void dg2_ddi_pre_enable_dp(struct intel_atomic_state *state,
|
||||
intel_ddi_enable_clock(encoder, crtc_state);
|
||||
|
||||
/* 4. Enable IO power */
|
||||
if (!intel_phy_is_tc(dev_priv, phy) ||
|
||||
dig_port->tc_mode != TC_PORT_TBT_ALT)
|
||||
if (!intel_tc_port_in_tbt_alt_mode(dig_port))
|
||||
dig_port->ddi_io_wakeref = intel_display_power_get(dev_priv,
|
||||
dig_port->ddi_io_power_domain);
|
||||
|
||||
@ -2374,7 +2331,8 @@ static void dg2_ddi_pre_enable_dp(struct intel_atomic_state *state,
|
||||
*/
|
||||
intel_ddi_enable_pipe_clock(encoder, crtc_state);
|
||||
|
||||
/* 5.b Not relevant to i915 for now */
|
||||
/* 5.b Configure transcoder for DP 2.0 128b/132b */
|
||||
intel_ddi_config_transcoder_dp2(encoder, crtc_state);
|
||||
|
||||
/*
|
||||
* 5.c Configure TRANS_DDI_FUNC_CTL DDI Select, DDI Mode Select & MST
|
||||
@ -2391,21 +2349,12 @@ static void dg2_ddi_pre_enable_dp(struct intel_atomic_state *state,
|
||||
*/
|
||||
|
||||
/* 5.e Configure voltage swing and related IO settings */
|
||||
intel_snps_phy_ddi_vswing_sequence(encoder, level);
|
||||
|
||||
/*
|
||||
* 5.f Configure and enable DDI_BUF_CTL
|
||||
* 5.g Wait for DDI_BUF_CTL DDI Idle Status = 0b (Not Idle), timeout
|
||||
* after 1200 us.
|
||||
*
|
||||
* We only configure what the register value will be here. Actual
|
||||
* enabling happens during link training farther down.
|
||||
*/
|
||||
intel_ddi_init_dp_buf_reg(encoder, crtc_state);
|
||||
encoder->set_signal_levels(encoder, crtc_state);
|
||||
|
||||
if (!is_mst)
|
||||
intel_dp_set_power(intel_dp, DP_SET_POWER_D0);
|
||||
|
||||
intel_dp_configure_protocol_converter(intel_dp, crtc_state);
|
||||
intel_dp_sink_set_decompression_state(intel_dp, crtc_state, true);
|
||||
/*
|
||||
* DDI FEC: "anticipates enabling FEC encoding sets the FEC_READY bit
|
||||
@ -2413,6 +2362,8 @@ static void dg2_ddi_pre_enable_dp(struct intel_atomic_state *state,
|
||||
* training
|
||||
*/
|
||||
intel_dp_sink_set_fec_ready(intel_dp, crtc_state);
|
||||
intel_dp_check_frl_training(intel_dp);
|
||||
intel_dp_pcon_dsc_configure(intel_dp, crtc_state);
|
||||
|
||||
/*
|
||||
* 5.h Follow DisplayPort specification training sequence (see notes for
|
||||
@ -2439,15 +2390,19 @@ static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state,
|
||||
{
|
||||
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
|
||||
struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
|
||||
bool is_mst = intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST);
|
||||
int level = intel_ddi_dp_level(intel_dp);
|
||||
|
||||
intel_dp_set_link_params(intel_dp,
|
||||
crtc_state->port_clock,
|
||||
crtc_state->lane_count);
|
||||
|
||||
/*
|
||||
* We only configure what the register value will be here. Actual
|
||||
* enabling happens during link training farther down.
|
||||
*/
|
||||
intel_ddi_init_dp_buf_reg(encoder, crtc_state);
|
||||
|
||||
/*
|
||||
* 1. Enable Power Wells
|
||||
*
|
||||
@ -2476,8 +2431,7 @@ static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state,
|
||||
intel_ddi_enable_clock(encoder, crtc_state);
|
||||
|
||||
/* 5. If IO power is controlled through PWR_WELL_CTL, Enable IO Power */
|
||||
if (!intel_phy_is_tc(dev_priv, phy) ||
|
||||
dig_port->tc_mode != TC_PORT_TBT_ALT) {
|
||||
if (!intel_tc_port_in_tbt_alt_mode(dig_port)) {
|
||||
drm_WARN_ON(&dev_priv->drm, dig_port->ddi_io_wakeref);
|
||||
dig_port->ddi_io_wakeref = intel_display_power_get(dev_priv,
|
||||
dig_port->ddi_io_power_domain);
|
||||
@ -2517,7 +2471,7 @@ static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state,
|
||||
*/
|
||||
|
||||
/* 7.e Configure voltage swing and related IO settings */
|
||||
tgl_ddi_vswing_sequence(encoder, crtc_state, level);
|
||||
encoder->set_signal_levels(encoder, crtc_state);
|
||||
|
||||
/*
|
||||
* 7.f Combo PHY: Configure PORT_CL_DW10 Static Power Down to power up
|
||||
@ -2530,16 +2484,6 @@ static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state,
|
||||
*/
|
||||
intel_ddi_mso_configure(crtc_state);
|
||||
|
||||
/*
|
||||
* 7.g Configure and enable DDI_BUF_CTL
|
||||
* 7.h Wait for DDI_BUF_CTL DDI Idle Status = 0b (Not Idle), timeout
|
||||
* after 500 us.
|
||||
*
|
||||
* We only configure what the register value will be here. Actual
|
||||
* enabling happens during link training farther down.
|
||||
*/
|
||||
intel_ddi_init_dp_buf_reg(encoder, crtc_state);
|
||||
|
||||
if (!is_mst)
|
||||
intel_dp_set_power(intel_dp, DP_SET_POWER_D0);
|
||||
|
||||
@ -2582,10 +2526,8 @@ static void hsw_ddi_pre_enable_dp(struct intel_atomic_state *state,
|
||||
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
enum port port = encoder->port;
|
||||
enum phy phy = intel_port_to_phy(dev_priv, port);
|
||||
struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
|
||||
bool is_mst = intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST);
|
||||
int level = intel_ddi_dp_level(intel_dp);
|
||||
|
||||
if (DISPLAY_VER(dev_priv) < 11)
|
||||
drm_WARN_ON(&dev_priv->drm,
|
||||
@ -2597,12 +2539,17 @@ static void hsw_ddi_pre_enable_dp(struct intel_atomic_state *state,
|
||||
crtc_state->port_clock,
|
||||
crtc_state->lane_count);
|
||||
|
||||
/*
|
||||
* We only configure what the register value will be here. Actual
|
||||
* enabling happens during link training farther down.
|
||||
*/
|
||||
intel_ddi_init_dp_buf_reg(encoder, crtc_state);
|
||||
|
||||
intel_pps_on(intel_dp);
|
||||
|
||||
intel_ddi_enable_clock(encoder, crtc_state);
|
||||
|
||||
if (!intel_phy_is_tc(dev_priv, phy) ||
|
||||
dig_port->tc_mode != TC_PORT_TBT_ALT) {
|
||||
if (!intel_tc_port_in_tbt_alt_mode(dig_port)) {
|
||||
drm_WARN_ON(&dev_priv->drm, dig_port->ddi_io_wakeref);
|
||||
dig_port->ddi_io_wakeref = intel_display_power_get(dev_priv,
|
||||
dig_port->ddi_io_power_domain);
|
||||
@ -2610,16 +2557,13 @@ static void hsw_ddi_pre_enable_dp(struct intel_atomic_state *state,
|
||||
|
||||
icl_program_mg_dp_mode(dig_port, crtc_state);
|
||||
|
||||
if (DISPLAY_VER(dev_priv) >= 11)
|
||||
icl_ddi_vswing_sequence(encoder, crtc_state, level);
|
||||
else if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv))
|
||||
bxt_ddi_vswing_sequence(encoder, crtc_state, level);
|
||||
else
|
||||
if (has_buf_trans_select(dev_priv))
|
||||
hsw_prepare_dp_ddi_buffers(encoder, crtc_state);
|
||||
|
||||
encoder->set_signal_levels(encoder, crtc_state);
|
||||
|
||||
intel_ddi_power_up_lanes(encoder, crtc_state);
|
||||
|
||||
intel_ddi_init_dp_buf_reg(encoder, crtc_state);
|
||||
if (!is_mst)
|
||||
intel_dp_set_power(intel_dp, DP_SET_POWER_D0);
|
||||
intel_dp_configure_protocol_converter(intel_dp, crtc_state);
|
||||
@ -2772,7 +2716,6 @@ static void intel_ddi_post_disable_dp(struct intel_atomic_state *state,
|
||||
struct intel_dp *intel_dp = &dig_port->dp;
|
||||
bool is_mst = intel_crtc_has_type(old_crtc_state,
|
||||
INTEL_OUTPUT_DP_MST);
|
||||
enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
|
||||
|
||||
if (!is_mst)
|
||||
intel_dp_set_infoframes(encoder, false,
|
||||
@ -2815,8 +2758,7 @@ static void intel_ddi_post_disable_dp(struct intel_atomic_state *state,
|
||||
intel_pps_vdd_on(intel_dp);
|
||||
intel_pps_off(intel_dp);
|
||||
|
||||
if (!intel_phy_is_tc(dev_priv, phy) ||
|
||||
dig_port->tc_mode != TC_PORT_TBT_ALT)
|
||||
if (!intel_tc_port_in_tbt_alt_mode(dig_port))
|
||||
intel_display_power_put(dev_priv,
|
||||
dig_port->ddi_io_power_domain,
|
||||
fetch_and_zero(&dig_port->ddi_io_wakeref));
|
||||
@ -2862,7 +2804,7 @@ static void intel_ddi_post_disable(struct intel_atomic_state *state,
|
||||
if (!intel_crtc_has_type(old_crtc_state, INTEL_OUTPUT_DP_MST)) {
|
||||
intel_crtc_vblank_off(old_crtc_state);
|
||||
|
||||
intel_disable_pipe(old_crtc_state);
|
||||
intel_disable_transcoder(old_crtc_state);
|
||||
|
||||
intel_vrr_disable(old_crtc_state);
|
||||
|
||||
@ -3005,12 +2947,11 @@ static void intel_enable_ddi_dp(struct intel_atomic_state *state,
|
||||
intel_dp_stop_link_train(intel_dp, crtc_state);
|
||||
|
||||
intel_edp_backlight_on(crtc_state, conn_state);
|
||||
intel_psr_enable(intel_dp, crtc_state, conn_state);
|
||||
|
||||
if (!dig_port->lspcon.active || dig_port->dp.has_hdmi_sink)
|
||||
intel_dp_set_infoframes(encoder, true, crtc_state, conn_state);
|
||||
|
||||
intel_edp_drrs_enable(intel_dp, crtc_state);
|
||||
intel_drrs_enable(intel_dp, crtc_state);
|
||||
|
||||
if (crtc_state->has_audio)
|
||||
intel_audio_codec_enable(encoder, crtc_state, conn_state);
|
||||
@ -3046,7 +2987,6 @@ static void intel_enable_ddi_hdmi(struct intel_atomic_state *state,
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
|
||||
struct drm_connector *connector = conn_state->connector;
|
||||
int level = intel_ddi_hdmi_level(encoder, crtc_state);
|
||||
enum port port = encoder->port;
|
||||
|
||||
if (!intel_hdmi_handle_sink_scrambling(encoder, connector,
|
||||
@ -3056,19 +2996,10 @@ static void intel_enable_ddi_hdmi(struct intel_atomic_state *state,
|
||||
"[CONNECTOR:%d:%s] Failed to configure sink scrambling/TMDS bit clock ratio\n",
|
||||
connector->base.id, connector->name);
|
||||
|
||||
if (IS_DG2(dev_priv))
|
||||
intel_snps_phy_ddi_vswing_sequence(encoder, U32_MAX);
|
||||
else if (DISPLAY_VER(dev_priv) >= 12)
|
||||
tgl_ddi_vswing_sequence(encoder, crtc_state, level);
|
||||
else if (DISPLAY_VER(dev_priv) == 11)
|
||||
icl_ddi_vswing_sequence(encoder, crtc_state, level);
|
||||
else if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv))
|
||||
bxt_ddi_vswing_sequence(encoder, crtc_state, level);
|
||||
else
|
||||
hsw_prepare_hdmi_ddi_buffers(encoder, crtc_state, level);
|
||||
if (has_buf_trans_select(dev_priv))
|
||||
hsw_prepare_hdmi_ddi_buffers(encoder, crtc_state);
|
||||
|
||||
if (DISPLAY_VER(dev_priv) == 9 && !IS_BROXTON(dev_priv))
|
||||
skl_ddi_set_iboost(encoder, crtc_state, level);
|
||||
encoder->set_signal_levels(encoder, crtc_state);
|
||||
|
||||
/* Display WA #1143: skl,kbl,cfl */
|
||||
if (DISPLAY_VER(dev_priv) == 9 && !IS_BROXTON(dev_priv)) {
|
||||
@ -3133,7 +3064,7 @@ static void intel_enable_ddi(struct intel_atomic_state *state,
|
||||
|
||||
intel_vrr_enable(encoder, crtc_state);
|
||||
|
||||
intel_enable_pipe(crtc_state);
|
||||
intel_enable_transcoder(crtc_state);
|
||||
|
||||
intel_crtc_vblank_on(crtc_state);
|
||||
|
||||
@ -3198,7 +3129,7 @@ static void intel_pre_disable_ddi(struct intel_atomic_state *state,
|
||||
return;
|
||||
|
||||
intel_dp = enc_to_intel_dp(encoder);
|
||||
intel_edp_drrs_disable(intel_dp, old_crtc_state);
|
||||
intel_drrs_disable(intel_dp, old_crtc_state);
|
||||
intel_psr_disable(intel_dp, old_crtc_state);
|
||||
}
|
||||
|
||||
@ -3226,11 +3157,10 @@ static void intel_ddi_update_pipe_dp(struct intel_atomic_state *state,
|
||||
|
||||
intel_ddi_set_dp_msa(crtc_state, conn_state);
|
||||
|
||||
intel_psr_update(intel_dp, crtc_state, conn_state);
|
||||
intel_dp_set_infoframes(encoder, true, crtc_state, conn_state);
|
||||
intel_edp_drrs_update(intel_dp, crtc_state);
|
||||
intel_drrs_update(intel_dp, crtc_state);
|
||||
|
||||
intel_panel_update_backlight(state, encoder, crtc_state, conn_state);
|
||||
intel_backlight_update(state, encoder, crtc_state, conn_state);
|
||||
}
|
||||
|
||||
void intel_ddi_update_pipe(struct intel_atomic_state *state,
|
||||
@ -3293,7 +3223,7 @@ intel_ddi_pre_pll_enable(struct intel_atomic_state *state,
|
||||
intel_ddi_main_link_aux_domain(dig_port));
|
||||
}
|
||||
|
||||
if (is_tc_port && dig_port->tc_mode != TC_PORT_TBT_ALT)
|
||||
if (is_tc_port && !intel_tc_port_in_tbt_alt_mode(dig_port))
|
||||
/*
|
||||
* Program the lane count for static/dynamic connections on
|
||||
* Type-C ports. Skip this step for TBT.
|
||||
@ -3553,9 +3483,6 @@ static void intel_ddi_read_func_ctl(struct intel_encoder *encoder,
|
||||
pipe_config->output_types |= BIT(INTEL_OUTPUT_HDMI);
|
||||
pipe_config->lane_count = 4;
|
||||
break;
|
||||
case TRANS_DDI_MODE_SELECT_FDI:
|
||||
pipe_config->output_types |= BIT(INTEL_OUTPUT_ANALOG);
|
||||
break;
|
||||
case TRANS_DDI_MODE_SELECT_DP_SST:
|
||||
if (encoder->type == INTEL_OUTPUT_EDP)
|
||||
pipe_config->output_types |= BIT(INTEL_OUTPUT_EDP);
|
||||
@ -3584,6 +3511,13 @@ static void intel_ddi_read_func_ctl(struct intel_encoder *encoder,
|
||||
pipe_config->infoframes.enable |=
|
||||
intel_hdmi_infoframes_enabled(encoder, pipe_config);
|
||||
break;
|
||||
case TRANS_DDI_MODE_SELECT_FDI_OR_128B132B:
|
||||
if (!HAS_DP20(dev_priv)) {
|
||||
/* FDI */
|
||||
pipe_config->output_types |= BIT(INTEL_OUTPUT_ANALOG);
|
||||
break;
|
||||
}
|
||||
fallthrough; /* 128b/132b */
|
||||
case TRANS_DDI_MODE_SELECT_DP_MST:
|
||||
pipe_config->output_types |= BIT(INTEL_OUTPUT_DP_MST);
|
||||
pipe_config->lane_count =
|
||||
@ -3807,7 +3741,13 @@ void hsw_ddi_get_config(struct intel_encoder *encoder,
|
||||
static void intel_ddi_sync_state(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
if (intel_crtc_has_dp_encoder(crtc_state))
|
||||
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
|
||||
enum phy phy = intel_port_to_phy(i915, encoder->port);
|
||||
|
||||
if (intel_phy_is_tc(i915, phy))
|
||||
intel_tc_port_sanitize(enc_to_dig_port(encoder));
|
||||
|
||||
if (crtc_state && intel_crtc_has_dp_encoder(crtc_state))
|
||||
intel_dp_sync_state(encoder, crtc_state);
|
||||
}
|
||||
|
||||
@ -3989,8 +3929,11 @@ static void intel_ddi_encoder_destroy(struct drm_encoder *encoder)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(encoder->dev);
|
||||
struct intel_digital_port *dig_port = enc_to_dig_port(to_intel_encoder(encoder));
|
||||
enum phy phy = intel_port_to_phy(i915, dig_port->base.port);
|
||||
|
||||
intel_dp_encoder_flush_work(encoder);
|
||||
if (intel_phy_is_tc(i915, phy))
|
||||
intel_tc_port_flush_work(dig_port);
|
||||
intel_display_power_flush_work(i915);
|
||||
|
||||
drm_encoder_cleanup(encoder);
|
||||
@ -4016,7 +3959,6 @@ static const struct drm_encoder_funcs intel_ddi_funcs = {
|
||||
static struct intel_connector *
|
||||
intel_ddi_init_dp_connector(struct intel_digital_port *dig_port)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
|
||||
struct intel_connector *connector;
|
||||
enum port port = dig_port->base.port;
|
||||
|
||||
@ -4029,17 +3971,6 @@ intel_ddi_init_dp_connector(struct intel_digital_port *dig_port)
|
||||
dig_port->dp.set_link_train = intel_ddi_set_link_train;
|
||||
dig_port->dp.set_idle_link_train = intel_ddi_set_idle_link_train;
|
||||
|
||||
if (IS_DG2(dev_priv))
|
||||
dig_port->dp.set_signal_levels = dg2_set_signal_levels;
|
||||
else if (DISPLAY_VER(dev_priv) >= 12)
|
||||
dig_port->dp.set_signal_levels = tgl_set_signal_levels;
|
||||
else if (DISPLAY_VER(dev_priv) >= 11)
|
||||
dig_port->dp.set_signal_levels = icl_set_signal_levels;
|
||||
else if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv))
|
||||
dig_port->dp.set_signal_levels = bxt_set_signal_levels;
|
||||
else
|
||||
dig_port->dp.set_signal_levels = hsw_set_signal_levels;
|
||||
|
||||
dig_port->dp.voltage_max = intel_ddi_dp_voltage_max;
|
||||
dig_port->dp.preemph_max = intel_ddi_dp_preemph_max;
|
||||
|
||||
@ -4415,7 +4346,7 @@ static void intel_ddi_encoder_suspend(struct intel_encoder *encoder)
|
||||
if (!intel_phy_is_tc(i915, phy))
|
||||
return;
|
||||
|
||||
intel_tc_port_disconnect_phy(dig_port);
|
||||
intel_tc_port_flush_work(dig_port);
|
||||
}
|
||||
|
||||
static void intel_ddi_encoder_shutdown(struct intel_encoder *encoder)
|
||||
@ -4430,7 +4361,7 @@ static void intel_ddi_encoder_shutdown(struct intel_encoder *encoder)
|
||||
if (!intel_phy_is_tc(i915, phy))
|
||||
return;
|
||||
|
||||
intel_tc_port_disconnect_phy(dig_port);
|
||||
intel_tc_port_flush_work(dig_port);
|
||||
}
|
||||
|
||||
#define port_tc_name(port) ((port) - PORT_TC1 + '1')
|
||||
@ -4611,6 +4542,24 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)
|
||||
encoder->get_config = hsw_ddi_get_config;
|
||||
}
|
||||
|
||||
if (IS_DG2(dev_priv)) {
|
||||
encoder->set_signal_levels = intel_snps_phy_set_signal_levels;
|
||||
} else if (DISPLAY_VER(dev_priv) >= 12) {
|
||||
if (intel_phy_is_combo(dev_priv, phy))
|
||||
encoder->set_signal_levels = icl_combo_phy_set_signal_levels;
|
||||
else
|
||||
encoder->set_signal_levels = tgl_dkl_phy_set_signal_levels;
|
||||
} else if (DISPLAY_VER(dev_priv) >= 11) {
|
||||
if (intel_phy_is_combo(dev_priv, phy))
|
||||
encoder->set_signal_levels = icl_combo_phy_set_signal_levels;
|
||||
else
|
||||
encoder->set_signal_levels = icl_mg_phy_set_signal_levels;
|
||||
} else if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv)) {
|
||||
encoder->set_signal_levels = bxt_ddi_phy_set_signal_levels;
|
||||
} else {
|
||||
encoder->set_signal_levels = hsw_set_signal_levels;
|
||||
}
|
||||
|
||||
intel_ddi_buf_trans_init(encoder);
|
||||
|
||||
if (DISPLAY_VER(dev_priv) >= 13)
|
||||
|
@ -59,13 +59,12 @@ void intel_ddi_set_vc_payload_alloc(const struct intel_crtc_state *crtc_state,
|
||||
bool state);
|
||||
void intel_ddi_compute_min_voltage_level(struct drm_i915_private *dev_priv,
|
||||
struct intel_crtc_state *crtc_state);
|
||||
u32 bxt_signal_levels(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state);
|
||||
u32 ddi_signal_levels(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state);
|
||||
int intel_ddi_toggle_hdcp_bits(struct intel_encoder *intel_encoder,
|
||||
enum transcoder cpu_transcoder,
|
||||
bool enable, u32 hdcp_mask);
|
||||
void intel_ddi_sanitize_encoder_pll_mapping(struct intel_encoder *encoder);
|
||||
int intel_ddi_level(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
int lane);
|
||||
|
||||
#endif /* __INTEL_DDI_H__ */
|
||||
|
File diff suppressed because it is too large
@@ -45,12 +45,19 @@ struct tgl_dkl_phy_ddi_buf_trans {
u32 dkl_de_emphasis_control;
};

struct dg2_snps_phy_buf_trans {
u8 snps_vswing;
u8 snps_pre_cursor;
u8 snps_post_cursor;
};

union intel_ddi_buf_trans_entry {
struct hsw_ddi_buf_trans hsw;
struct bxt_ddi_buf_trans bxt;
struct icl_ddi_buf_trans icl;
struct icl_mg_phy_ddi_buf_trans mg;
struct tgl_dkl_phy_ddi_buf_trans dkl;
struct dg2_snps_phy_buf_trans snps;
};

struct intel_ddi_buf_trans {
@@ -61,10 +68,6 @@ struct intel_ddi_buf_trans {

bool is_hobl_buf_trans(const struct intel_ddi_buf_trans *table);

int intel_ddi_hdmi_num_entries(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
int *default_entry);

void intel_ddi_buf_trans_init(struct intel_encoder *encoder);

#endif
File diff suppressed because it is too large
@@ -270,6 +270,7 @@ enum tc_port {
};

enum tc_port_mode {
TC_PORT_DISCONNECTED,
TC_PORT_TBT_ALT,
TC_PORT_DP_ALT,
TC_PORT_LEGACY,
@@ -531,8 +532,8 @@ enum phy intel_port_to_phy(struct drm_i915_private *i915, enum port port);
bool is_trans_port_sync_mode(const struct intel_crtc_state *state);

void intel_plane_destroy(struct drm_plane *plane);
void intel_enable_pipe(const struct intel_crtc_state *new_crtc_state);
void intel_disable_pipe(const struct intel_crtc_state *old_crtc_state);
void intel_enable_transcoder(const struct intel_crtc_state *new_crtc_state);
void intel_disable_transcoder(const struct intel_crtc_state *old_crtc_state);
void i830_enable_pipe(struct drm_i915_private *dev_priv, enum pipe pipe);
void i830_disable_pipe(struct drm_i915_private *dev_priv, enum pipe pipe);
enum pipe intel_crtc_pch_transcoder(struct intel_crtc *crtc);
@@ -548,8 +549,6 @@ void intel_init_display_hooks(struct drm_i915_private *dev_priv);
unsigned int intel_fb_xy_to_linear(int x, int y,
const struct intel_plane_state *state,
int plane);
unsigned int intel_fb_align_height(const struct drm_framebuffer *fb,
int color_plane, unsigned int height);
void intel_add_fb_offsets(int *x, int *y,
const struct intel_plane_state *state, int plane);
unsigned int intel_rotation_info_size(const struct intel_rotation_info *rot_info);
@@ -630,10 +629,6 @@ struct intel_encoder *
intel_get_crtc_new_encoder(const struct intel_atomic_state *state,
const struct intel_crtc_state *crtc_state);

unsigned int intel_surf_alignment(const struct drm_framebuffer *fb,
int color_plane);
unsigned int intel_tile_width_bytes(const struct drm_framebuffer *fb, int color_plane);

void intel_display_driver_register(struct drm_i915_private *i915);
void intel_display_driver_unregister(struct drm_i915_private *i915);

@@ -650,23 +645,10 @@ void intel_init_pch_refclk(struct drm_i915_private *dev_priv);
int intel_modeset_all_pipes(struct intel_atomic_state *state);

/* modesetting asserts */
void assert_panel_unlocked(struct drm_i915_private *dev_priv,
enum pipe pipe);
void assert_pll(struct drm_i915_private *dev_priv,
enum pipe pipe, bool state);
#define assert_pll_enabled(d, p) assert_pll(d, p, true)
#define assert_pll_disabled(d, p) assert_pll(d, p, false)
void assert_dsi_pll(struct drm_i915_private *dev_priv, bool state);
#define assert_dsi_pll_enabled(d) assert_dsi_pll(d, true)
#define assert_dsi_pll_disabled(d) assert_dsi_pll(d, false)
void assert_fdi_rx_pll(struct drm_i915_private *dev_priv,
enum pipe pipe, bool state);
#define assert_fdi_rx_pll_enabled(d, p) assert_fdi_rx_pll(d, p, true)
#define assert_fdi_rx_pll_disabled(d, p) assert_fdi_rx_pll(d, p, false)
void assert_pipe(struct drm_i915_private *dev_priv,
enum transcoder cpu_transcoder, bool state);
#define assert_pipe_enabled(d, t) assert_pipe(d, t, true)
#define assert_pipe_disabled(d, t) assert_pipe(d, t, false)
void assert_transcoder(struct drm_i915_private *dev_priv,
enum transcoder cpu_transcoder, bool state);
#define assert_transcoder_enabled(d, t) assert_transcoder(d, t, true)
#define assert_transcoder_disabled(d, t) assert_transcoder(d, t, false)

/* Use I915_STATE_WARN(x) and I915_STATE_WARN_ON() (rather than WARN() and
* WARN_ON()) for hw state sanity checks to check for unexpected conditions
@@ -13,6 +13,7 @@
#include "intel_display_types.h"
#include "intel_dmc.h"
#include "intel_dp.h"
#include "intel_drrs.h"
#include "intel_fbc.h"
#include "intel_hdcp.h"
#include "intel_hdmi.h"
@@ -1323,9 +1324,6 @@ static int i915_drrs_status(struct seq_file *m, void *unused)
return 0;
}

#define LPSP_STATUS(COND) (COND ? seq_puts(m, "LPSP: enabled\n") : \
seq_puts(m, "LPSP: disabled\n"))

static bool
intel_lpsp_power_well_enabled(struct drm_i915_private *i915,
enum i915_power_well_id power_well_id)
@@ -1344,32 +1342,20 @@ intel_lpsp_power_well_enabled(struct drm_i915_private *i915,
static int i915_lpsp_status(struct seq_file *m, void *unused)
{
struct drm_i915_private *i915 = node_to_i915(m->private);
bool lpsp_enabled = false;

if (DISPLAY_VER(i915) >= 13) {
LPSP_STATUS(!intel_lpsp_power_well_enabled(i915,
SKL_DISP_PW_2));
if (DISPLAY_VER(i915) >= 13 || IS_DISPLAY_VER(i915, 9, 10)) {
lpsp_enabled = !intel_lpsp_power_well_enabled(i915, SKL_DISP_PW_2);
} else if (IS_DISPLAY_VER(i915, 11, 12)) {
lpsp_enabled = !intel_lpsp_power_well_enabled(i915, ICL_DISP_PW_3);
} else if (IS_HASWELL(i915) || IS_BROADWELL(i915)) {
lpsp_enabled = !intel_lpsp_power_well_enabled(i915, HSW_DISP_PW_GLOBAL);
} else {
seq_puts(m, "LPSP: not supported\n");
return 0;
}

switch (DISPLAY_VER(i915)) {
case 12:
case 11:
LPSP_STATUS(!intel_lpsp_power_well_enabled(i915, ICL_DISP_PW_3));
break;
case 10:
case 9:
LPSP_STATUS(!intel_lpsp_power_well_enabled(i915, SKL_DISP_PW_2));
break;
default:
/*
* Apart from HASWELL/BROADWELL other legacy platform doesn't
* support lpsp.
*/
if (IS_HASWELL(i915) || IS_BROADWELL(i915))
LPSP_STATUS(!intel_lpsp_power_well_enabled(i915, HSW_DISP_PW_GLOBAL));
else
seq_puts(m, "LPSP: not supported\n");
}
seq_printf(m, "LPSP: %s\n", enableddisabled(lpsp_enabled));

return 0;
}
@@ -2044,11 +2030,9 @@ static int i915_drrs_ctl_set(void *data, u64 val)

intel_dp = enc_to_intel_dp(encoder);
if (val)
intel_edp_drrs_enable(intel_dp,
crtc_state);
intel_drrs_enable(intel_dp, crtc_state);
else
intel_edp_drrs_disable(intel_dp,
crtc_state);
intel_drrs_disable(intel_dp, crtc_state);
}
drm_connector_list_iter_end(&conn_iter);

@@ -2240,14 +2224,12 @@ static int i915_psr_status_show(struct seq_file *m, void *data)
}
DEFINE_SHOW_ATTRIBUTE(i915_psr_status);

#define LPSP_CAPABLE(COND) (COND ? seq_puts(m, "LPSP: capable\n") : \
seq_puts(m, "LPSP: incapable\n"))

static int i915_lpsp_capability_show(struct seq_file *m, void *data)
{
struct drm_connector *connector = m->private;
struct drm_i915_private *i915 = to_i915(connector->dev);
struct intel_encoder *encoder;
bool lpsp_capable = false;

encoder = intel_attached_encoder(to_intel_connector(connector));
if (!encoder)
@@ -2256,35 +2238,27 @@ static int i915_lpsp_capability_show(struct seq_file *m, void *data)
if (connector->status != connector_status_connected)
return -ENODEV;

if (DISPLAY_VER(i915) >= 13) {
LPSP_CAPABLE(encoder->port <= PORT_B);
return 0;
}

switch (DISPLAY_VER(i915)) {
case 12:
if (DISPLAY_VER(i915) >= 13)
lpsp_capable = encoder->port <= PORT_B;
else if (DISPLAY_VER(i915) >= 12)
/*
* Actually TGL can drive LPSP on port till DDI_C
* but there is no physical connected DDI_C on TGL sku's,
* even driver is not initilizing DDI_C port for gen12.
*/
LPSP_CAPABLE(encoder->port <= PORT_B);
break;
case 11:
LPSP_CAPABLE(connector->connector_type == DRM_MODE_CONNECTOR_DSI ||
connector->connector_type == DRM_MODE_CONNECTOR_eDP);
break;
case 10:
case 9:
LPSP_CAPABLE(encoder->port == PORT_A &&
(connector->connector_type == DRM_MODE_CONNECTOR_DSI ||
connector->connector_type == DRM_MODE_CONNECTOR_eDP ||
connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort));
break;
default:
if (IS_HASWELL(i915) || IS_BROADWELL(i915))
LPSP_CAPABLE(connector->connector_type == DRM_MODE_CONNECTOR_eDP);
}
lpsp_capable = encoder->port <= PORT_B;
else if (DISPLAY_VER(i915) == 11)
lpsp_capable = (connector->connector_type == DRM_MODE_CONNECTOR_DSI ||
connector->connector_type == DRM_MODE_CONNECTOR_eDP);
else if (IS_DISPLAY_VER(i915, 9, 10))
lpsp_capable = (encoder->port == PORT_A &&
(connector->connector_type == DRM_MODE_CONNECTOR_DSI ||
connector->connector_type == DRM_MODE_CONNECTOR_eDP ||
connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort));
else if (IS_HASWELL(i915) || IS_BROADWELL(i915))
lpsp_capable = connector->connector_type == DRM_MODE_CONNECTOR_eDP;

seq_printf(m, "LPSP: %s\n", lpsp_capable ? "capable" : "incapable");

return 0;
}
@@ -2468,17 +2442,16 @@ static const struct file_operations i915_dsc_bpp_fops = {
*
* Cleanup will be done by drm_connector_unregister() through a call to
* drm_debugfs_connector_remove().
*
* Returns 0 on success, negative error codes on error.
*/
int intel_connector_debugfs_add(struct drm_connector *connector)
void intel_connector_debugfs_add(struct intel_connector *intel_connector)
{
struct drm_connector *connector = &intel_connector->base;
struct dentry *root = connector->debugfs_entry;
struct drm_i915_private *dev_priv = to_i915(connector->dev);

/* The connector must have been registered beforehands. */
if (!root)
return -ENODEV;
return;

if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
debugfs_create_file("i915_panel_timings", S_IRUGO, root,
@@ -2511,33 +2484,23 @@ int intel_connector_debugfs_add(struct drm_connector *connector)
connector, &i915_dsc_bpp_fops);
}

/* Legacy panels doesn't lpsp on any platform */
if ((DISPLAY_VER(dev_priv) >= 9 || IS_HASWELL(dev_priv) ||
IS_BROADWELL(dev_priv)) &&
(connector->connector_type == DRM_MODE_CONNECTOR_DSI ||
connector->connector_type == DRM_MODE_CONNECTOR_eDP ||
connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort ||
connector->connector_type == DRM_MODE_CONNECTOR_HDMIA ||
connector->connector_type == DRM_MODE_CONNECTOR_HDMIB))
if (connector->connector_type == DRM_MODE_CONNECTOR_DSI ||
connector->connector_type == DRM_MODE_CONNECTOR_eDP ||
connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort ||
connector->connector_type == DRM_MODE_CONNECTOR_HDMIA ||
connector->connector_type == DRM_MODE_CONNECTOR_HDMIB)
debugfs_create_file("i915_lpsp_capability", 0444, root,
connector, &i915_lpsp_capability_fops);

return 0;
}

/**
* intel_crtc_debugfs_add - add i915 specific crtc debugfs files
* @crtc: pointer to a drm_crtc
*
* Returns 0 on success, negative error codes on error.
*
* Failure to add debugfs entries should generally be ignored.
*/
int intel_crtc_debugfs_add(struct drm_crtc *crtc)
void intel_crtc_debugfs_add(struct drm_crtc *crtc)
{
if (!crtc->debugfs_entry)
return -ENODEV;

crtc_updates_add(crtc);
return 0;
if (crtc->debugfs_entry)
crtc_updates_add(crtc);
}
@@ -6,18 +6,18 @@
#ifndef __INTEL_DISPLAY_DEBUGFS_H__
#define __INTEL_DISPLAY_DEBUGFS_H__

struct drm_connector;
struct drm_crtc;
struct drm_i915_private;
struct intel_connector;

#ifdef CONFIG_DEBUG_FS
void intel_display_debugfs_register(struct drm_i915_private *i915);
int intel_connector_debugfs_add(struct drm_connector *connector);
int intel_crtc_debugfs_add(struct drm_crtc *crtc);
void intel_connector_debugfs_add(struct intel_connector *connector);
void intel_crtc_debugfs_add(struct drm_crtc *crtc);
#else
static inline void intel_display_debugfs_register(struct drm_i915_private *i915) {}
static inline int intel_connector_debugfs_add(struct drm_connector *connector) { return 0; }
static inline int intel_crtc_debugfs_add(struct drm_crtc *crtc) { return 0; }
static inline void intel_connector_debugfs_add(struct intel_connector *connector) {}
static inline void intel_crtc_debugfs_add(struct drm_crtc *crtc) {}
#endif

#endif /* __INTEL_DISPLAY_DEBUGFS_H__ */
@@ -9,11 +9,12 @@
#include "i915_irq.h"
#include "intel_cdclk.h"
#include "intel_combo_phy.h"
#include "intel_display_power.h"
#include "intel_de.h"
#include "intel_display_power.h"
#include "intel_display_types.h"
#include "intel_dmc.h"
#include "intel_dpio_phy.h"
#include "intel_dpll.h"
#include "intel_hotplug.h"
#include "intel_pm.h"
#include "intel_pps.h"
@@ -560,7 +561,7 @@ static void icl_tc_port_assert_ref_held(struct drm_i915_private *dev_priv,
if (drm_WARN_ON(&dev_priv->drm, !dig_port))
return;

if (DISPLAY_VER(dev_priv) == 11 && dig_port->tc_legacy_port)
if (DISPLAY_VER(dev_priv) == 11 && intel_tc_cold_requires_aux_pw(dig_port))
return;

drm_WARN_ON(&dev_priv->drm, !intel_tc_port_ref_held(dig_port));
@@ -629,7 +630,7 @@ icl_tc_phy_aux_power_well_enable(struct drm_i915_private *dev_priv,
* exit sequence.
*/
timeout_expected = is_tbt || intel_tc_cold_requires_aux_pw(dig_port);
if (DISPLAY_VER(dev_priv) == 11 && dig_port->tc_legacy_port)
if (DISPLAY_VER(dev_priv) == 11 && intel_tc_cold_requires_aux_pw(dig_port))
icl_tc_cold_exit(dev_priv);

hsw_wait_for_power_well_enable(dev_priv, power_well, timeout_expected);
@@ -1195,7 +1196,7 @@ static void gen9_disable_dc_states(struct drm_i915_private *dev_priv)
if (!HAS_DISPLAY(dev_priv))
return;

dev_priv->display.get_cdclk(dev_priv, &cdclk_config);
intel_cdclk_get_cdclk(dev_priv, &cdclk_config);
/* Can't read out voltage_level so can't use intel_cdclk_changed() */
drm_WARN_ON(&dev_priv->drm,
intel_cdclk_needs_modeset(&dev_priv->cdclk.hw,
@@ -410,6 +410,10 @@ void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
for ((wf) = intel_display_power_get((i915), (domain)); (wf); \
intel_display_power_put_async((i915), (domain), (wf)), (wf) = 0)

#define with_intel_display_power_if_enabled(i915, domain, wf) \
for ((wf) = intel_display_power_get_if_enabled((i915), (domain)); (wf); \
intel_display_power_put_async((i915), (domain), (wf)), (wf) = 0)

void chv_phy_powergate_lanes(struct intel_encoder *encoder,
bool override, unsigned int mask);
bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum dpio_phy phy,
@@ -103,8 +103,6 @@ struct intel_fb_view {
* in the rotated and remapped GTT view all no-CCS formats (up to 2
* color planes) are supported.
*
* TODO: add support for CCS formats in the remapped GTT view.
*
* The view information shared by all FB color planes in the FB,
* like dst x/y and src/dst width, is stored separately in
* intel_plane_state.
@@ -271,6 +269,9 @@ struct intel_encoder {
const struct intel_ddi_buf_trans *(*get_buf_trans)(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
int *n_entries);
void (*set_signal_levels)(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state);

enum hpd_pin hpd_pin;
enum intel_display_power_domain power_domain;
/* for communication with audio component; protected by av_mutex */
@@ -428,10 +429,6 @@ struct intel_hdcp_shim {
int (*hdcp_2_2_capable)(struct intel_digital_port *dig_port,
bool *capable);

/* Detects whether a HDCP 1.4 sink connected in MST topology */
int (*streams_type1_capable)(struct intel_connector *connector,
bool *capable);

/* Write HDCP2.2 messages */
int (*write_2_2_msg)(struct intel_digital_port *dig_port,
void *buf, size_t size);
@@ -1060,12 +1057,14 @@ struct intel_crtc_state {
struct intel_link_m_n dp_m2_n2;
bool has_drrs;

/* PSR is supported but might not be enabled due the lack of enabled planes */
bool has_psr;
bool has_psr2;
bool enable_psr2_sel_fetch;
bool req_psr2_sdp_prior_scanline;
u32 dc3co_exitline;
u16 su_y_granularity;
struct drm_dp_vsc_sdp psr_vsc;

/*
* Frequence the dpll for the port should run at. Differs from the
@@ -1529,7 +1528,6 @@ struct intel_psr {
u32 dc3co_exitline;
u32 dc3co_exit_delay;
struct delayed_work dc3co_work;
struct drm_dp_vsc_sdp vsc;
};

struct intel_dp {
@@ -1606,8 +1604,6 @@ struct intel_dp {
u8 dp_train_pat);
void (*set_idle_link_train)(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state);
void (*set_signal_levels)(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state);

u8 (*preemph_max)(struct intel_dp *intel_dp);
u8 (*voltage_max)(struct intel_dp *intel_dp,
@@ -1667,8 +1663,11 @@ struct intel_digital_port {
enum intel_display_power_domain ddi_io_power_domain;
intel_wakeref_t ddi_io_wakeref;
intel_wakeref_t aux_wakeref;

struct mutex tc_lock; /* protects the TypeC port mode */
intel_wakeref_t tc_lock_wakeref;
enum intel_display_power_domain tc_lock_power_domain;
struct delayed_work tc_disconnect_phy_work;
int tc_link_refcount;
bool tc_legacy_port:1;
char tc_port_name[8];
@@ -1684,6 +1683,8 @@ struct intel_digital_port {
bool hdcp_auth_status;
/* HDCP port data need to pass to security f/w */
struct hdcp_port_data hdcp_port_data;
/* Whether the MST topology supports HDCP Type 1 Content */
bool hdcp_mst_type1_capable;

void (*write_infoframe)(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
@@ -2035,28 +2036,6 @@ to_intel_frontbuffer(struct drm_framebuffer *fb)
return fb ? to_intel_framebuffer(fb)->frontbuffer : NULL;
}

static inline bool intel_panel_use_ssc(struct drm_i915_private *dev_priv)
{
if (dev_priv->params.panel_use_ssc >= 0)
return dev_priv->params.panel_use_ssc != 0;
return dev_priv->vbt.lvds_use_ssc
&& !(dev_priv->quirks & QUIRK_LVDS_SSC_DISABLE);
}

static inline u32 i9xx_dpll_compute_fp(struct dpll *dpll)
{
return dpll->n << 16 | dpll->m1 << 8 | dpll->m2;
}

static inline u32 intel_fdi_link_freq(struct drm_i915_private *dev_priv,
const struct intel_crtc_state *pipe_config)
{
if (HAS_DDI(dev_priv))
return pipe_config->port_clock; /* SPLL */
else
return dev_priv->fdi_pll_freq;
}

static inline bool is_ccs_modifier(u64 modifier)
{
return modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS ||
@@ -45,8 +45,8 @@

#define GEN12_DMC_MAX_FW_SIZE ICL_DMC_MAX_FW_SIZE

#define ADLP_DMC_PATH DMC_PATH(adlp, 2, 10)
#define ADLP_DMC_VERSION_REQUIRED DMC_VERSION(2, 10)
#define ADLP_DMC_PATH DMC_PATH(adlp, 2, 12)
#define ADLP_DMC_VERSION_REQUIRED DMC_VERSION(2, 12)
MODULE_FIRMWARE(ADLP_DMC_PATH);

#define ADLS_DMC_PATH DMC_PATH(adls, 2, 01)
@@ -255,20 +255,10 @@ intel_get_stepping_info(struct drm_i915_private *i915,

static void gen9_set_dc_state_debugmask(struct drm_i915_private *dev_priv)
{
u32 val, mask;

mask = DC_STATE_DEBUG_MASK_MEMORY_UP;

if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv))
mask |= DC_STATE_DEBUG_MASK_CORES;

/* The below bit doesn't need to be cleared ever afterwards */
val = intel_de_read(dev_priv, DC_STATE_DEBUG);
if ((val & mask) != mask) {
val |= mask;
intel_de_write(dev_priv, DC_STATE_DEBUG, val);
intel_de_posting_read(dev_priv, DC_STATE_DEBUG);
}
intel_de_rmw(dev_priv, DC_STATE_DEBUG, 0,
DC_STATE_DEBUG_MASK_CORES | DC_STATE_DEBUG_MASK_MEMORY_UP);
intel_de_posting_read(dev_priv, DC_STATE_DEBUG);
}

/**
@@ -805,11 +795,14 @@ void intel_dmc_ucode_resume(struct drm_i915_private *dev_priv)
*/
void intel_dmc_ucode_fini(struct drm_i915_private *dev_priv)
{
int id;

if (!HAS_DMC(dev_priv))
return;

intel_dmc_ucode_suspend(dev_priv);
drm_WARN_ON(&dev_priv->drm, dev_priv->dmc.wakeref);

kfree(dev_priv->dmc.dmc_info[DMC_FW_MAIN].payload);
for (id = 0; id < DMC_FW_MAX; id++)
kfree(dev_priv->dmc.dmc_info[id].payload);
}
File diff suppressed because it is too large
@@ -26,7 +26,7 @@ struct intel_dp;
struct intel_encoder;

struct link_config_limits {
int min_clock, max_clock;
int min_rate, max_rate;
int min_lane_count, max_lane_count;
int min_bpp, max_bpp;
};
@@ -58,6 +58,7 @@ int intel_dp_compute_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config,
struct drm_connector_state *conn_state);
bool intel_dp_is_edp(struct intel_dp *intel_dp);
bool intel_dp_is_uhbr(const struct intel_crtc_state *crtc_state);
bool intel_dp_is_port_edp(struct drm_i915_private *dev_priv, enum port port);
enum irqreturn intel_dp_hpd_pulse(struct intel_digital_port *dig_port,
bool long_hpd);
@@ -70,25 +71,14 @@ int intel_dp_max_link_rate(struct intel_dp *intel_dp);
int intel_dp_max_lane_count(struct intel_dp *intel_dp);
int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);

void intel_edp_drrs_enable(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state);
void intel_edp_drrs_disable(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state);
void intel_edp_drrs_update(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state);
void intel_edp_drrs_invalidate(struct drm_i915_private *dev_priv,
unsigned int frontbuffer_bits);
void intel_edp_drrs_flush(struct drm_i915_private *dev_priv,
unsigned int frontbuffer_bits);

void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
u8 *link_bw, u8 *rate_select);
bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp);
bool intel_dp_source_supports_hbr3(struct intel_dp *intel_dp);
bool intel_dp_source_supports_tps3(struct drm_i915_private *i915);
bool intel_dp_source_supports_tps4(struct drm_i915_private *i915);

bool intel_dp_get_colorimetry_status(struct intel_dp *intel_dp);
int intel_dp_link_required(int pixel_clock, int bpp);
int intel_dp_max_data_rate(int max_link_clock, int max_lanes);
int intel_dp_max_data_rate(int max_link_rate, int max_lanes);
bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp);
bool intel_dp_needs_vsc_sdp(const struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state);
@@ -98,7 +88,7 @@ void intel_dp_compute_psr_vsc_sdp(struct intel_dp *intel_dp,
struct drm_dp_vsc_sdp *vsc);
void intel_write_dp_vsc_sdp(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
struct drm_dp_vsc_sdp *vsc);
const struct drm_dp_vsc_sdp *vsc);
void intel_dp_set_infoframes(struct intel_encoder *encoder, bool enable,
const struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state);

@@ -150,9 +150,6 @@ static u32 skl_get_aux_send_ctl(struct intel_dp *intel_dp,
u32 unused)
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_i915_private *i915 =
to_i915(dig_port->base.base.dev);
enum phy phy = intel_port_to_phy(i915, dig_port->base.port);
u32 ret;

/*
@@ -170,8 +167,7 @@ static u32 skl_get_aux_send_ctl(struct intel_dp *intel_dp,
DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(32) |
DP_AUX_CH_CTL_SYNC_PULSE_SKL(32);

if (intel_phy_is_tc(i915, phy) &&
dig_port->tc_mode == TC_PORT_TBT_ALT)
if (intel_tc_port_in_tbt_alt_mode(dig_port))
ret |= DP_AUX_CH_CTL_TBT_IO;

return ret;

@@ -34,9 +34,9 @@
* for some reason.
*/

#include "intel_backlight.h"
#include "intel_display_types.h"
#include "intel_dp_aux_backlight.h"
#include "intel_panel.h"

/* TODO:
* Implement HDR, right now we just implement the bare minimum to bring us back into SDR mode so we
@@ -146,7 +146,7 @@ intel_dp_aux_hdr_get_backlight(struct intel_connector *connector, enum pipe pipe
if (!panel->backlight.edp.intel.sdr_uses_aux) {
u32 pwm_level = panel->backlight.pwm_funcs->get(connector, pipe);

return intel_panel_backlight_level_from_pwm(connector, pwm_level);
return intel_backlight_level_from_pwm(connector, pwm_level);
}

/* Assume 100% brightness if backlight controls aren't enabled yet */
@@ -187,9 +187,9 @@ intel_dp_aux_hdr_set_backlight(const struct drm_connector_state *conn_state, u32
if (panel->backlight.edp.intel.sdr_uses_aux) {
intel_dp_aux_hdr_set_aux_backlight(conn_state, level);
} else {
const u32 pwm_level = intel_panel_backlight_level_to_pwm(connector, level);
const u32 pwm_level = intel_backlight_level_to_pwm(connector, level);

intel_panel_set_pwm_level(conn_state, pwm_level);
intel_backlight_set_pwm_level(conn_state, pwm_level);
}
}

@@ -215,7 +215,7 @@ intel_dp_aux_hdr_enable_backlight(const struct intel_crtc_state *crtc_state,
ctrl |= INTEL_EDP_HDR_TCON_BRIGHTNESS_AUX_ENABLE;
intel_dp_aux_hdr_set_aux_backlight(conn_state, level);
} else {
u32 pwm_level = intel_panel_backlight_level_to_pwm(connector, level);
u32 pwm_level = intel_backlight_level_to_pwm(connector, level);

panel->backlight.pwm_funcs->enable(crtc_state, conn_state, pwm_level);

@@ -238,7 +238,7 @@ intel_dp_aux_hdr_disable_backlight(const struct drm_connector_state *conn_state,
return;

/* Note we want the actual pwm_level to be 0, regardless of pwm_min */
panel->backlight.pwm_funcs->disable(conn_state, intel_panel_invert_pwm_level(connector, 0));
panel->backlight.pwm_funcs->disable(conn_state, intel_backlight_invert_pwm_level(connector, 0));
}

static int

@@ -446,8 +446,6 @@ static
int intel_dp_hdcp2_write_msg(struct intel_digital_port *dig_port,
void *buf, size_t size)
{
struct intel_dp *dp = &dig_port->dp;
struct intel_hdcp *hdcp = &dp->attached_connector->hdcp;
unsigned int offset;
u8 *byte = buf;
ssize_t ret, bytes_to_write, len;
@@ -463,8 +461,6 @@ int intel_dp_hdcp2_write_msg(struct intel_digital_port *dig_port,
|
||||
bytes_to_write = size - 1;
|
||||
byte++;
|
||||
|
||||
hdcp->cp_irq_count_cached = atomic_read(&hdcp->cp_irq_count);
|
||||
|
||||
while (bytes_to_write) {
|
||||
len = bytes_to_write > DP_AUX_MAX_PAYLOAD_BYTES ?
|
||||
DP_AUX_MAX_PAYLOAD_BYTES : bytes_to_write;
|
||||
@ -482,29 +478,11 @@ int intel_dp_hdcp2_write_msg(struct intel_digital_port *dig_port,
|
||||
return size;
|
||||
}
|
||||
|
||||
static int
|
||||
get_rxinfo_hdcp_1_dev_downstream(struct intel_digital_port *dig_port, bool *hdcp_1_x)
|
||||
{
|
||||
u8 rx_info[HDCP_2_2_RXINFO_LEN];
|
||||
int ret;
|
||||
|
||||
ret = drm_dp_dpcd_read(&dig_port->dp.aux,
|
||||
DP_HDCP_2_2_REG_RXINFO_OFFSET,
|
||||
(void *)rx_info, HDCP_2_2_RXINFO_LEN);
|
||||
|
||||
if (ret != HDCP_2_2_RXINFO_LEN)
|
||||
return ret >= 0 ? -EIO : ret;
|
||||
|
||||
*hdcp_1_x = HDCP_2_2_HDCP1_DEVICE_CONNECTED(rx_info[1]) ? true : false;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static
|
||||
ssize_t get_receiver_id_list_size(struct intel_digital_port *dig_port)
|
||||
ssize_t get_receiver_id_list_rx_info(struct intel_digital_port *dig_port, u32 *dev_cnt, u8 *byte)
|
||||
{
|
||||
u8 rx_info[HDCP_2_2_RXINFO_LEN];
|
||||
u32 dev_cnt;
|
||||
ssize_t ret;
|
||||
u8 *rx_info = byte;
|
||||
|
||||
ret = drm_dp_dpcd_read(&dig_port->dp.aux,
|
||||
DP_HDCP_2_2_REG_RXINFO_OFFSET,
|
||||
@ -512,15 +490,11 @@ ssize_t get_receiver_id_list_size(struct intel_digital_port *dig_port)
|
||||
if (ret != HDCP_2_2_RXINFO_LEN)
|
||||
return ret >= 0 ? -EIO : ret;
|
||||
|
||||
dev_cnt = (HDCP_2_2_DEV_COUNT_HI(rx_info[0]) << 4 |
|
||||
*dev_cnt = (HDCP_2_2_DEV_COUNT_HI(rx_info[0]) << 4 |
|
||||
HDCP_2_2_DEV_COUNT_LO(rx_info[1]));
|
||||
|
||||
if (dev_cnt > HDCP_2_2_MAX_DEVICE_COUNT)
|
||||
dev_cnt = HDCP_2_2_MAX_DEVICE_COUNT;
|
||||
|
||||
ret = sizeof(struct hdcp2_rep_send_receiverid_list) -
|
||||
HDCP_2_2_RECEIVER_IDS_MAX_LEN +
|
||||
(dev_cnt * HDCP_2_2_RECEIVER_ID_LEN);
|
||||
if (*dev_cnt > HDCP_2_2_MAX_DEVICE_COUNT)
|
||||
*dev_cnt = HDCP_2_2_MAX_DEVICE_COUNT;
|
||||
|
||||
return ret;
|
||||
}
|
||||
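The rework above replaces the one-shot size computation with a helper that hands back the raw RxInfo bytes together with the decoded device count, leaving the variable message size to the caller. A minimal standalone sketch of that decode step, reusing the real HDCP_2_2_* helpers from <drm/drm_hdcp.h> (the helper name itself is invented, illustration only):

/* Illustration only: decode and clamp the downstream device count from
 * the two HDCP 2.2 RxInfo bytes, as the new helper above does. */
static u32 example_hdcp2_rxinfo_dev_count(const u8 rx_info[HDCP_2_2_RXINFO_LEN])
{
	u32 dev_cnt = (HDCP_2_2_DEV_COUNT_HI(rx_info[0]) << 4) |
		      HDCP_2_2_DEV_COUNT_LO(rx_info[1]);

	return min_t(u32, dev_cnt, HDCP_2_2_MAX_DEVICE_COUNT);
}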
@ -530,12 +504,15 @@ int intel_dp_hdcp2_read_msg(struct intel_digital_port *dig_port,
|
||||
u8 msg_id, void *buf, size_t size)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
struct intel_dp *dp = &dig_port->dp;
|
||||
struct intel_hdcp *hdcp = &dp->attached_connector->hdcp;
|
||||
unsigned int offset;
|
||||
u8 *byte = buf;
|
||||
ssize_t ret, bytes_to_recv, len;
|
||||
const struct hdcp2_dp_msg_data *hdcp2_msg_data;
|
||||
ktime_t msg_end = ktime_set(0, 0);
|
||||
bool msg_expired;
|
||||
u32 dev_cnt;
|
||||
|
||||
hdcp2_msg_data = get_hdcp2_dp_msg_data(msg_id);
|
||||
if (!hdcp2_msg_data)
|
||||
@ -546,18 +523,25 @@ int intel_dp_hdcp2_read_msg(struct intel_digital_port *dig_port,
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
if (msg_id == HDCP_2_2_REP_SEND_RECVID_LIST) {
|
||||
ret = get_receiver_id_list_size(dig_port);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
size = ret;
|
||||
}
|
||||
bytes_to_recv = size - 1;
|
||||
hdcp->cp_irq_count_cached = atomic_read(&hdcp->cp_irq_count);
|
||||
|
||||
/* DP adaptation msgs has no msg_id */
|
||||
byte++;
|
||||
|
||||
if (msg_id == HDCP_2_2_REP_SEND_RECVID_LIST) {
|
||||
ret = get_receiver_id_list_rx_info(dig_port, &dev_cnt, byte);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
byte += ret;
|
||||
size = sizeof(struct hdcp2_rep_send_receiverid_list) -
|
||||
HDCP_2_2_RXINFO_LEN - HDCP_2_2_RECEIVER_IDS_MAX_LEN +
|
||||
(dev_cnt * HDCP_2_2_RECEIVER_ID_LEN);
|
||||
offset += HDCP_2_2_RXINFO_LEN;
|
||||
}
|
||||
|
||||
bytes_to_recv = size - 1;
|
||||
|
||||
while (bytes_to_recv) {
|
||||
len = bytes_to_recv > DP_AUX_MAX_PAYLOAD_BYTES ?
|
||||
DP_AUX_MAX_PAYLOAD_BYTES : bytes_to_recv;
|
||||
@ -664,27 +648,6 @@ int intel_dp_hdcp2_capable(struct intel_digital_port *dig_port,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static
|
||||
int intel_dp_mst_streams_type1_capable(struct intel_connector *connector,
|
||||
bool *capable)
|
||||
{
|
||||
struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
int ret;
|
||||
bool hdcp_1_x;
|
||||
|
||||
ret = get_rxinfo_hdcp_1_dev_downstream(dig_port, &hdcp_1_x);
|
||||
if (ret) {
|
||||
drm_dbg_kms(&i915->drm,
|
||||
"[%s:%d] failed to read RxInfo ret=%d\n",
|
||||
connector->base.name, connector->base.base.id, ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
*capable = !hdcp_1_x;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct intel_hdcp_shim intel_dp_hdcp_shim = {
|
||||
.write_an_aksv = intel_dp_hdcp_write_an_aksv,
|
||||
.read_bksv = intel_dp_hdcp_read_bksv,
|
||||
@ -833,7 +796,6 @@ static const struct intel_hdcp_shim intel_dp_mst_hdcp_shim = {
|
||||
.stream_2_2_encryption = intel_dp_mst_hdcp2_stream_encryption,
|
||||
.check_2_2_link = intel_dp_mst_hdcp2_check_link,
|
||||
.hdcp_2_2_capable = intel_dp_hdcp2_capable,
|
||||
.streams_type1_capable = intel_dp_mst_streams_type1_capable,
|
||||
.protocol = HDCP_PROTOCOL_DP,
|
||||
};
|
||||
|
||||
|
@ -301,21 +301,33 @@ static u8 intel_dp_phy_preemph_max(struct intel_dp *intel_dp,
|
||||
return preemph_max;
|
||||
}
|
||||
|
||||
void
|
||||
intel_dp_get_adjust_train(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
enum drm_dp_phy dp_phy,
|
||||
const u8 link_status[DP_LINK_STATUS_SIZE])
|
||||
static bool has_per_lane_signal_levels(struct intel_dp *intel_dp,
|
||||
enum drm_dp_phy dp_phy)
|
||||
{
|
||||
return !intel_dp_phy_is_downstream_of_source(intel_dp, dp_phy);
|
||||
}
|
||||
|
||||
static u8 intel_dp_get_lane_adjust_train(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
enum drm_dp_phy dp_phy,
|
||||
const u8 link_status[DP_LINK_STATUS_SIZE],
|
||||
int lane)
|
||||
{
|
||||
u8 v = 0;
|
||||
u8 p = 0;
|
||||
int lane;
|
||||
u8 voltage_max;
|
||||
u8 preemph_max;
|
||||
|
||||
for (lane = 0; lane < crtc_state->lane_count; lane++) {
|
||||
v = max(v, drm_dp_get_adjust_request_voltage(link_status, lane));
|
||||
p = max(p, drm_dp_get_adjust_request_pre_emphasis(link_status, lane));
|
||||
if (has_per_lane_signal_levels(intel_dp, dp_phy)) {
|
||||
lane = min(lane, crtc_state->lane_count - 1);
|
||||
|
||||
v = drm_dp_get_adjust_request_voltage(link_status, lane);
|
||||
p = drm_dp_get_adjust_request_pre_emphasis(link_status, lane);
|
||||
} else {
|
||||
for (lane = 0; lane < crtc_state->lane_count; lane++) {
|
||||
v = max(v, drm_dp_get_adjust_request_voltage(link_status, lane));
|
||||
p = max(p, drm_dp_get_adjust_request_pre_emphasis(link_status, lane));
|
||||
}
|
||||
}
|
||||
|
||||
preemph_max = intel_dp_phy_preemph_max(intel_dp, dp_phy);
|
||||
@ -328,8 +340,21 @@ intel_dp_get_adjust_train(struct intel_dp *intel_dp,
|
||||
if (v >= voltage_max)
|
||||
v = voltage_max | DP_TRAIN_MAX_SWING_REACHED;
|
||||
|
||||
return v | p;
|
||||
}
|
||||
|
||||
void
|
||||
intel_dp_get_adjust_train(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
enum drm_dp_phy dp_phy,
|
||||
const u8 link_status[DP_LINK_STATUS_SIZE])
|
||||
{
|
||||
int lane;
|
||||
|
||||
for (lane = 0; lane < 4; lane++)
|
||||
intel_dp->train_set[lane] = v | p;
|
||||
intel_dp->train_set[lane] =
|
||||
intel_dp_get_lane_adjust_train(intel_dp, crtc_state,
|
||||
dp_phy, link_status, lane);
|
||||
}
|
||||
|
||||
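The refactor above computes the train_set entry per lane instead of one value shared by all lanes: PHYs with per-lane drive settings take the level requested for that specific lane, while the legacy path still uses the worst case across all active lanes. A hedged, standalone restatement of that selection (the helper name is made up; drm_dp_get_adjust_request_voltage() is the real DRM helper):

/* Sketch: pick the requested voltage swing for one lane, either
 * per-lane or as the maximum across all active lanes. */
static u8 example_lane_vswing(const u8 link_status[DP_LINK_STATUS_SIZE],
			      int lane_count, int lane, bool per_lane)
{
	u8 v = 0;
	int i;

	if (per_lane)
		return drm_dp_get_adjust_request_voltage(link_status,
							 min(lane, lane_count - 1));

	for (i = 0; i < lane_count; i++)
		v = max(v, drm_dp_get_adjust_request_voltage(link_status, i));

	return v;
}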
static int intel_dp_training_pattern_set_reg(struct intel_dp *intel_dp,
|
||||
@ -394,25 +419,43 @@ intel_dp_program_link_training_pattern(struct intel_dp *intel_dp,
|
||||
intel_dp->set_link_train(intel_dp, crtc_state, dp_train_pat);
|
||||
}
|
||||
|
||||
#define TRAIN_SET_FMT "%d%s/%d%s/%d%s/%d%s"
|
||||
#define _TRAIN_SET_VSWING_ARGS(train_set) \
|
||||
((train_set) & DP_TRAIN_VOLTAGE_SWING_MASK) >> DP_TRAIN_VOLTAGE_SWING_SHIFT, \
|
||||
(train_set) & DP_TRAIN_MAX_SWING_REACHED ? "(max)" : ""
|
||||
#define TRAIN_SET_VSWING_ARGS(train_set) \
|
||||
_TRAIN_SET_VSWING_ARGS((train_set)[0]), \
|
||||
_TRAIN_SET_VSWING_ARGS((train_set)[1]), \
|
||||
_TRAIN_SET_VSWING_ARGS((train_set)[2]), \
|
||||
_TRAIN_SET_VSWING_ARGS((train_set)[3])
|
||||
#define _TRAIN_SET_PREEMPH_ARGS(train_set) \
|
||||
((train_set) & DP_TRAIN_PRE_EMPHASIS_MASK) >> DP_TRAIN_PRE_EMPHASIS_SHIFT, \
|
||||
(train_set) & DP_TRAIN_MAX_PRE_EMPHASIS_REACHED ? "(max)" : ""
|
||||
#define TRAIN_SET_PREEMPH_ARGS(train_set) \
|
||||
_TRAIN_SET_PREEMPH_ARGS((train_set)[0]), \
|
||||
_TRAIN_SET_PREEMPH_ARGS((train_set)[1]), \
|
||||
_TRAIN_SET_PREEMPH_ARGS((train_set)[2]), \
|
||||
_TRAIN_SET_PREEMPH_ARGS((train_set)[3])
|
||||
|
||||
void intel_dp_set_signal_levels(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
enum drm_dp_phy dp_phy)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
u8 train_set = intel_dp->train_set[0];
|
||||
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
char phy_name[10];
|
||||
|
||||
drm_dbg_kms(&dev_priv->drm, "Using vswing level %d%s, pre-emphasis level %d%s, at %s\n",
|
||||
train_set & DP_TRAIN_VOLTAGE_SWING_MASK,
|
||||
train_set & DP_TRAIN_MAX_SWING_REACHED ? " (max)" : "",
|
||||
(train_set & DP_TRAIN_PRE_EMPHASIS_MASK) >>
|
||||
DP_TRAIN_PRE_EMPHASIS_SHIFT,
|
||||
train_set & DP_TRAIN_MAX_PRE_EMPHASIS_REACHED ?
|
||||
" (max)" : "",
|
||||
drm_dbg_kms(&dev_priv->drm, "[ENCODER:%d:%s] lanes: %d, "
|
||||
"vswing levels: " TRAIN_SET_FMT ", "
|
||||
"pre-emphasis levels: " TRAIN_SET_FMT ", at %s\n",
|
||||
encoder->base.base.id, encoder->base.name,
|
||||
crtc_state->lane_count,
|
||||
TRAIN_SET_VSWING_ARGS(intel_dp->train_set),
|
||||
TRAIN_SET_PREEMPH_ARGS(intel_dp->train_set),
|
||||
intel_dp_phy_name(dp_phy, phy_name, sizeof(phy_name)));
|
||||
|
||||
if (intel_dp_phy_is_downstream_of_source(intel_dp, dp_phy))
|
||||
intel_dp->set_signal_levels(intel_dp, crtc_state);
|
||||
encoder->set_signal_levels(encoder, crtc_state);
|
||||
}
|
||||
|
||||
static bool
|
||||
@ -495,11 +538,10 @@ intel_dp_prepare_link_train(struct intel_dp *intel_dp,
|
||||
&rate_select, 1);
|
||||
|
||||
link_config[0] = crtc_state->vrr.enable ? DP_MSA_TIMING_PAR_IGNORE_EN : 0;
|
||||
link_config[1] = DP_SET_ANSI_8B10B;
|
||||
link_config[1] = intel_dp_is_uhbr(crtc_state) ?
|
||||
DP_SET_ANSI_128B132B : DP_SET_ANSI_8B10B;
|
||||
drm_dp_dpcd_write(&intel_dp->aux, DP_DOWNSPREAD_CTRL, link_config, 2);
|
||||
|
||||
intel_dp->DP |= DP_PORT_EN;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
@ -512,6 +554,25 @@ static void intel_dp_link_training_clock_recovery_delay(struct intel_dp *intel_d
|
||||
drm_dp_lttpr_link_train_clock_recovery_delay();
|
||||
}
|
||||
|
||||
static bool intel_dp_adjust_request_changed(int lane_count,
|
||||
const u8 old_link_status[DP_LINK_STATUS_SIZE],
|
||||
const u8 new_link_status[DP_LINK_STATUS_SIZE])
|
||||
{
|
||||
int lane;
|
||||
|
||||
for (lane = 0; lane < lane_count; lane++) {
|
||||
u8 old = drm_dp_get_adjust_request_voltage(old_link_status, lane) |
|
||||
drm_dp_get_adjust_request_pre_emphasis(old_link_status, lane);
|
||||
u8 new = drm_dp_get_adjust_request_voltage(new_link_status, lane) |
|
||||
drm_dp_get_adjust_request_pre_emphasis(new_link_status, lane);
|
||||
|
||||
if (old != new)
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
/*
|
||||
* Perform the link training clock recovery phase on the given DP PHY using
|
||||
* training pattern 1.
|
||||
@ -522,7 +583,7 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp,
|
||||
enum drm_dp_phy dp_phy)
|
||||
{
|
||||
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
|
||||
u8 voltage;
|
||||
u8 old_link_status[DP_LINK_STATUS_SIZE] = {};
|
||||
int voltage_tries, cr_tries, max_cr_tries;
|
||||
bool max_vswing_reached = false;
|
||||
|
||||
@ -575,8 +636,6 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp,
|
||||
return false;
|
||||
}
|
||||
|
||||
voltage = intel_dp->train_set[0] & DP_TRAIN_VOLTAGE_SWING_MASK;
|
||||
|
||||
/* Update training set as requested by target */
|
||||
intel_dp_get_adjust_train(intel_dp, crtc_state, dp_phy,
|
||||
link_status);
|
||||
@ -586,12 +645,14 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp,
|
||||
return false;
|
||||
}
|
||||
|
||||
if ((intel_dp->train_set[0] & DP_TRAIN_VOLTAGE_SWING_MASK) ==
|
||||
voltage)
|
||||
if (!intel_dp_adjust_request_changed(crtc_state->lane_count,
|
||||
old_link_status, link_status))
|
||||
++voltage_tries;
|
||||
else
|
||||
voltage_tries = 1;
|
||||
|
||||
memcpy(old_link_status, link_status, sizeof(link_status));
|
||||
|
||||
if (intel_dp_link_max_vswing_reached(intel_dp, crtc_state))
|
||||
max_vswing_reached = true;
|
||||
|
||||
@ -602,52 +663,56 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp,
|
||||
}
|
||||
|
||||
/*
|
||||
* Pick training pattern for channel equalization. Training pattern 4 for HBR3
|
||||
* or for 1.4 devices that support it, training Pattern 3 for HBR2
|
||||
* or 1.2 devices that support it, Training Pattern 2 otherwise.
|
||||
* Pick Training Pattern Sequence (TPS) for channel equalization. 128b/132b TPS2
|
||||
* for UHBR+, TPS4 for HBR3 or for 1.4 devices that support it, TPS3 for HBR2 or
|
||||
* 1.2 devices that support it, TPS2 otherwise.
|
||||
*/
|
||||
static u32 intel_dp_training_pattern(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
enum drm_dp_phy dp_phy)
|
||||
{
|
||||
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
|
||||
bool source_tps3, sink_tps3, source_tps4, sink_tps4;
|
||||
|
||||
/* UHBR+ use separate 128b/132b TPS2 */
|
||||
if (intel_dp_is_uhbr(crtc_state))
|
||||
return DP_TRAINING_PATTERN_2;
|
||||
|
||||
/*
|
||||
* Intel platforms that support HBR3 also support TPS4. It is mandatory
|
||||
* for all downstream devices that support HBR3. There are no known eDP
|
||||
* panels that support TPS4 as of Feb 2018 as per VESA eDP_v1.4b_E1
|
||||
* specification.
|
||||
* TPS4 support is mandatory for all downstream devices that
|
||||
* support HBR3. There are no known eDP panels that support
|
||||
* TPS4 as of Feb 2018 as per VESA eDP_v1.4b_E1 specification.
|
||||
* LTTPRs must support TPS4.
|
||||
*/
|
||||
source_tps4 = intel_dp_source_supports_hbr3(intel_dp);
|
||||
source_tps4 = intel_dp_source_supports_tps4(i915);
|
||||
sink_tps4 = dp_phy != DP_PHY_DPRX ||
|
||||
drm_dp_tps4_supported(intel_dp->dpcd);
|
||||
if (source_tps4 && sink_tps4) {
|
||||
return DP_TRAINING_PATTERN_4;
|
||||
} else if (crtc_state->port_clock == 810000) {
|
||||
if (!source_tps4)
|
||||
drm_dbg_kms(&dp_to_i915(intel_dp)->drm,
|
||||
"8.1 Gbps link rate without source HBR3/TPS4 support\n");
|
||||
drm_dbg_kms(&i915->drm,
|
||||
"8.1 Gbps link rate without source TPS4 support\n");
|
||||
if (!sink_tps4)
|
||||
drm_dbg_kms(&dp_to_i915(intel_dp)->drm,
|
||||
drm_dbg_kms(&i915->drm,
|
||||
"8.1 Gbps link rate without sink TPS4 support\n");
|
||||
}
|
||||
|
||||
/*
|
||||
* Intel platforms that support HBR2 also support TPS3. TPS3 support is
|
||||
* also mandatory for downstream devices that support HBR2. However, not
|
||||
* all sinks follow the spec.
|
||||
* TPS3 support is mandatory for downstream devices that
|
||||
* support HBR2. However, not all sinks follow the spec.
|
||||
*/
|
||||
source_tps3 = intel_dp_source_supports_hbr2(intel_dp);
|
||||
source_tps3 = intel_dp_source_supports_tps3(i915);
|
||||
sink_tps3 = dp_phy != DP_PHY_DPRX ||
|
||||
drm_dp_tps3_supported(intel_dp->dpcd);
|
||||
if (source_tps3 && sink_tps3) {
|
||||
return DP_TRAINING_PATTERN_3;
|
||||
} else if (crtc_state->port_clock >= 540000) {
|
||||
if (!source_tps3)
|
||||
drm_dbg_kms(&dp_to_i915(intel_dp)->drm,
|
||||
">=5.4/6.48 Gbps link rate without source HBR2/TPS3 support\n");
|
||||
drm_dbg_kms(&i915->drm,
|
||||
">=5.4/6.48 Gbps link rate without source TPS3 support\n");
|
||||
if (!sink_tps3)
|
||||
drm_dbg_kms(&dp_to_i915(intel_dp)->drm,
|
||||
drm_dbg_kms(&i915->drm,
|
||||
">=5.4/6.48 Gbps link rate without sink TPS3 support\n");
|
||||
}
|
||||
|
||||
|
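The updated comment and code above amount to a simple priority order for the channel-equalization pattern. A condensed sketch of that cascade (hypothetical helper, ignoring the debug messages and the LTTPR vs. DPRX distinction):

/* Sketch of the TPS selection order: 128b/132b links always use their
 * own TPS2; otherwise fall from TPS4 to TPS3 to TPS2 depending on what
 * both source and sink support. */
static u32 example_pick_training_pattern(bool is_uhbr,
					 bool source_tps4, bool sink_tps4,
					 bool source_tps3, bool sink_tps3)
{
	if (is_uhbr)
		return DP_TRAINING_PATTERN_2;	/* 128b/132b TPS2 */

	if (source_tps4 && sink_tps4)
		return DP_TRAINING_PATTERN_4;

	if (source_tps3 && sink_tps3)
		return DP_TRAINING_PATTERN_3;

	return DP_TRAINING_PATTERN_2;
}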
@ -61,7 +61,7 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
|
||||
int bpp, slots = -EINVAL;
|
||||
|
||||
crtc_state->lane_count = limits->max_lane_count;
|
||||
crtc_state->port_clock = limits->max_clock;
|
||||
crtc_state->port_clock = limits->max_rate;
|
||||
|
||||
for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
|
||||
crtc_state->pipe_bpp = bpp;
|
||||
@ -131,8 +131,8 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
|
||||
* for MST we always configure max link bw - the spec doesn't
|
||||
* seem to suggest we should do otherwise.
|
||||
*/
|
||||
limits.min_clock =
|
||||
limits.max_clock = intel_dp_max_link_rate(intel_dp);
|
||||
limits.min_rate =
|
||||
limits.max_rate = intel_dp_max_link_rate(intel_dp);
|
||||
|
||||
limits.min_lane_count =
|
||||
limits.max_lane_count = intel_dp_max_lane_count(intel_dp);
|
||||
@ -396,7 +396,6 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state,
|
||||
to_intel_connector(old_conn_state->connector);
|
||||
struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
|
||||
bool last_mst_stream;
|
||||
u32 val;
|
||||
|
||||
intel_dp->active_mst_links--;
|
||||
last_mst_stream = intel_dp->active_mst_links == 0;
|
||||
@ -406,18 +405,14 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state,
|
||||
|
||||
intel_crtc_vblank_off(old_crtc_state);
|
||||
|
||||
intel_disable_pipe(old_crtc_state);
|
||||
intel_disable_transcoder(old_crtc_state);
|
||||
|
||||
drm_dp_update_payload_part2(&intel_dp->mst_mgr);
|
||||
|
||||
clear_act_sent(encoder, old_crtc_state);
|
||||
|
||||
val = intel_de_read(dev_priv,
|
||||
TRANS_DDI_FUNC_CTL(old_crtc_state->cpu_transcoder));
|
||||
val &= ~TRANS_DDI_DP_VC_PAYLOAD_ALLOC;
|
||||
intel_de_write(dev_priv,
|
||||
TRANS_DDI_FUNC_CTL(old_crtc_state->cpu_transcoder),
|
||||
val);
|
||||
intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(old_crtc_state->cpu_transcoder),
|
||||
TRANS_DDI_DP_VC_PAYLOAD_ALLOC, 0);
|
||||
|
||||
wait_for_act_sent(encoder, old_crtc_state);
|
||||
|
||||
@ -555,6 +550,17 @@ static void intel_mst_enable_dp(struct intel_atomic_state *state,
|
||||
|
||||
clear_act_sent(encoder, pipe_config);
|
||||
|
||||
if (intel_dp_is_uhbr(pipe_config)) {
|
||||
const struct drm_display_mode *adjusted_mode =
|
||||
&pipe_config->hw.adjusted_mode;
|
||||
u64 crtc_clock_hz = KHz(adjusted_mode->crtc_clock);
|
||||
|
||||
intel_de_write(dev_priv, TRANS_DP2_VFREQHIGH(pipe_config->cpu_transcoder),
|
||||
TRANS_DP2_VFREQ_PIXEL_CLOCK(crtc_clock_hz >> 24));
|
||||
intel_de_write(dev_priv, TRANS_DP2_VFREQLOW(pipe_config->cpu_transcoder),
|
||||
TRANS_DP2_VFREQ_PIXEL_CLOCK(crtc_clock_hz & 0xffffff));
|
||||
}
|
||||
|
||||
intel_ddi_enable_transcoder_func(encoder, pipe_config);
|
||||
|
||||
intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(trans), 0,
|
||||
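For 128b/132b (UHBR) MST streams the pixel clock is now handed to the transcoder in Hz, split across the TRANS_DP2_VFREQHIGH/VFREQLOW registers as seen above. A trivial sketch of that split (helper name invented, illustration only):

/* Sketch: split a pixel clock in Hz into the 24-bit halves programmed
 * into TRANS_DP2_VFREQHIGH (bits 47:24) and TRANS_DP2_VFREQLOW (bits 23:0). */
static void example_dp2_vfreq_split(u64 pixel_clock_hz, u32 *high, u32 *low)
{
	*high = pixel_clock_hz >> 24;
	*low = pixel_clock_hz & 0xffffff;
}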
@ -571,7 +577,7 @@ static void intel_mst_enable_dp(struct intel_atomic_state *state,
|
||||
intel_de_rmw(dev_priv, CHICKEN_TRANS(trans), 0,
|
||||
FECSTALL_DIS_DPTSTREAM_DPTTG);
|
||||
|
||||
intel_enable_pipe(pipe_config);
|
||||
intel_enable_transcoder(pipe_config);
|
||||
|
||||
intel_crtc_vblank_on(pipe_config);
|
||||
|
||||
|
@ -23,6 +23,8 @@
|
||||
|
||||
#include "display/intel_dp.h"
|
||||
|
||||
#include "intel_ddi.h"
|
||||
#include "intel_ddi_buf_trans.h"
|
||||
#include "intel_de.h"
|
||||
#include "intel_display_types.h"
|
||||
#include "intel_dpio_phy.h"
|
||||
@ -266,15 +268,22 @@ void bxt_port_to_phy_channel(struct drm_i915_private *dev_priv, enum port port,
|
||||
*ch = DPIO_CH0;
|
||||
}
|
||||
|
||||
void bxt_ddi_phy_set_signal_level(struct drm_i915_private *dev_priv,
|
||||
enum port port, u32 margin, u32 scale,
|
||||
u32 enable, u32 deemphasis)
|
||||
void bxt_ddi_phy_set_signal_levels(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
u32 val;
|
||||
enum dpio_phy phy;
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
int level = intel_ddi_level(encoder, crtc_state, 0);
|
||||
const struct intel_ddi_buf_trans *trans;
|
||||
enum dpio_channel ch;
|
||||
enum dpio_phy phy;
|
||||
int n_entries;
|
||||
u32 val;
|
||||
|
||||
bxt_port_to_phy_channel(dev_priv, port, &phy, &ch);
|
||||
trans = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
|
||||
if (drm_WARN_ON_ONCE(&dev_priv->drm, !trans))
|
||||
return;
|
||||
|
||||
bxt_port_to_phy_channel(dev_priv, encoder->port, &phy, &ch);
|
||||
|
||||
/*
|
||||
* While we write to the group register to program all lanes at once we
|
||||
@ -286,12 +295,13 @@ void bxt_ddi_phy_set_signal_level(struct drm_i915_private *dev_priv,
|
||||
|
||||
val = intel_de_read(dev_priv, BXT_PORT_TX_DW2_LN0(phy, ch));
|
||||
val &= ~(MARGIN_000 | UNIQ_TRANS_SCALE);
|
||||
val |= margin << MARGIN_000_SHIFT | scale << UNIQ_TRANS_SCALE_SHIFT;
|
||||
val |= trans->entries[level].bxt.margin << MARGIN_000_SHIFT |
|
||||
trans->entries[level].bxt.scale << UNIQ_TRANS_SCALE_SHIFT;
|
||||
intel_de_write(dev_priv, BXT_PORT_TX_DW2_GRP(phy, ch), val);
|
||||
|
||||
val = intel_de_read(dev_priv, BXT_PORT_TX_DW3_LN0(phy, ch));
|
||||
val &= ~SCALE_DCOMP_METHOD;
|
||||
if (enable)
|
||||
if (trans->entries[level].bxt.enable)
|
||||
val |= SCALE_DCOMP_METHOD;
|
||||
|
||||
if ((val & UNIQUE_TRANGE_EN_METHOD) && !(val & SCALE_DCOMP_METHOD))
|
||||
@ -302,7 +312,7 @@ void bxt_ddi_phy_set_signal_level(struct drm_i915_private *dev_priv,
|
||||
|
||||
val = intel_de_read(dev_priv, BXT_PORT_TX_DW4_LN0(phy, ch));
|
||||
val &= ~DE_EMPHASIS;
|
||||
val |= deemphasis << DEEMPH_SHIFT;
|
||||
val |= trans->entries[level].bxt.deemphasis << DEEMPH_SHIFT;
|
||||
intel_de_write(dev_priv, BXT_PORT_TX_DW4_GRP(phy, ch), val);
|
||||
|
||||
val = intel_de_read(dev_priv, BXT_PORT_PCS_DW10_LN01(phy, ch));
|
||||
|
@ -17,9 +17,8 @@ struct intel_encoder;
|
||||
|
||||
void bxt_port_to_phy_channel(struct drm_i915_private *dev_priv, enum port port,
|
||||
enum dpio_phy *phy, enum dpio_channel *ch);
|
||||
void bxt_ddi_phy_set_signal_level(struct drm_i915_private *dev_priv,
|
||||
enum port port, u32 margin, u32 scale,
|
||||
u32 enable, u32 deemphasis);
|
||||
void bxt_ddi_phy_set_signal_levels(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state);
|
||||
void bxt_ddi_phy_init(struct drm_i915_private *dev_priv, enum dpio_phy phy);
|
||||
void bxt_ddi_phy_uninit(struct drm_i915_private *dev_priv, enum dpio_phy phy);
|
||||
bool bxt_ddi_phy_is_enabled(struct drm_i915_private *dev_priv,
|
||||
|
File diff suppressed because it is too large
@ -18,29 +18,25 @@ void intel_dpll_init_clock_hook(struct drm_i915_private *dev_priv);
|
||||
int vlv_calc_dpll_params(int refclk, struct dpll *clock);
|
||||
int pnv_calc_dpll_params(int refclk, struct dpll *clock);
|
||||
int i9xx_calc_dpll_params(int refclk, struct dpll *clock);
|
||||
void vlv_compute_dpll(struct intel_crtc *crtc,
|
||||
struct intel_crtc_state *pipe_config);
|
||||
void chv_compute_dpll(struct intel_crtc *crtc,
|
||||
struct intel_crtc_state *pipe_config);
|
||||
u32 i9xx_dpll_compute_fp(const struct dpll *dpll);
|
||||
void vlv_compute_dpll(struct intel_crtc_state *crtc_state);
|
||||
void chv_compute_dpll(struct intel_crtc_state *crtc_state);
|
||||
|
||||
int vlv_force_pll_on(struct drm_i915_private *dev_priv, enum pipe pipe,
|
||||
const struct dpll *dpll);
|
||||
void vlv_force_pll_off(struct drm_i915_private *dev_priv, enum pipe pipe);
|
||||
void i9xx_enable_pll(struct intel_crtc *crtc,
|
||||
const struct intel_crtc_state *crtc_state);
|
||||
void vlv_enable_pll(struct intel_crtc *crtc,
|
||||
const struct intel_crtc_state *pipe_config);
|
||||
void chv_enable_pll(struct intel_crtc *crtc,
|
||||
const struct intel_crtc_state *pipe_config);
|
||||
void vlv_disable_pll(struct drm_i915_private *dev_priv, enum pipe pipe);
|
||||
|
||||
void chv_enable_pll(const struct intel_crtc_state *crtc_state);
|
||||
void chv_disable_pll(struct drm_i915_private *dev_priv, enum pipe pipe);
|
||||
void vlv_enable_pll(const struct intel_crtc_state *crtc_state);
|
||||
void vlv_disable_pll(struct drm_i915_private *dev_priv, enum pipe pipe);
|
||||
void i9xx_enable_pll(const struct intel_crtc_state *crtc_state);
|
||||
void i9xx_disable_pll(const struct intel_crtc_state *crtc_state);
|
||||
void vlv_prepare_pll(struct intel_crtc *crtc,
|
||||
const struct intel_crtc_state *pipe_config);
|
||||
void chv_prepare_pll(struct intel_crtc *crtc,
|
||||
const struct intel_crtc_state *pipe_config);
|
||||
bool bxt_find_best_dpll(struct intel_crtc_state *crtc_state,
|
||||
struct dpll *best_clock);
|
||||
int chv_calc_dpll_params(int refclk, struct dpll *pll_clock);
|
||||
|
||||
void assert_pll_enabled(struct drm_i915_private *i915, enum pipe pipe);
|
||||
void assert_pll_disabled(struct drm_i915_private *i915, enum pipe pipe);
|
||||
|
||||
#endif
|
||||
|
@ -26,6 +26,7 @@
|
||||
#include "intel_dpio_phy.h"
|
||||
#include "intel_dpll.h"
|
||||
#include "intel_dpll_mgr.h"
|
||||
#include "intel_tc.h"
|
||||
|
||||
/**
|
||||
* DOC: Display PLLs
|
||||
@ -184,34 +185,6 @@ intel_tc_pll_enable_reg(struct drm_i915_private *i915,
|
||||
return MG_PLL_ENABLE(tc_port);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_prepare_shared_dpll - call a dpll's prepare hook
|
||||
* @crtc_state: CRTC, and its state, which has a shared dpll
|
||||
*
|
||||
* This calls the PLL's prepare hook if it has one and if the PLL is not
|
||||
* already enabled. The prepare hook is platform specific.
|
||||
*/
|
||||
void intel_prepare_shared_dpll(const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
|
||||
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
|
||||
struct intel_shared_dpll *pll = crtc_state->shared_dpll;
|
||||
|
||||
if (drm_WARN_ON(&dev_priv->drm, pll == NULL))
|
||||
return;
|
||||
|
||||
mutex_lock(&dev_priv->dpll.lock);
|
||||
drm_WARN_ON(&dev_priv->drm, !pll->state.pipe_mask);
|
||||
if (!pll->active_mask) {
|
||||
drm_dbg(&dev_priv->drm, "setting up %s\n", pll->info->name);
|
||||
drm_WARN_ON(&dev_priv->drm, pll->on);
|
||||
assert_shared_dpll_disabled(dev_priv, pll);
|
||||
|
||||
pll->info->funcs->prepare(dev_priv, pll);
|
||||
}
|
||||
mutex_unlock(&dev_priv->dpll.lock);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_enable_shared_dpll - enable a CRTC's shared DPLL
|
||||
* @crtc_state: CRTC, and its state, which has a shared DPLL
|
||||
@ -451,15 +424,6 @@ static bool ibx_pch_dpll_get_hw_state(struct drm_i915_private *dev_priv,
|
||||
return val & DPLL_VCO_ENABLE;
|
||||
}
|
||||
|
||||
static void ibx_pch_dpll_prepare(struct drm_i915_private *dev_priv,
|
||||
struct intel_shared_dpll *pll)
|
||||
{
|
||||
const enum intel_dpll_id id = pll->info->id;
|
||||
|
||||
intel_de_write(dev_priv, PCH_FP0(id), pll->state.hw_state.fp0);
|
||||
intel_de_write(dev_priv, PCH_FP1(id), pll->state.hw_state.fp1);
|
||||
}
|
||||
|
||||
static void ibx_assert_pch_refclk_enabled(struct drm_i915_private *dev_priv)
|
||||
{
|
||||
u32 val;
|
||||
@ -481,6 +445,9 @@ static void ibx_pch_dpll_enable(struct drm_i915_private *dev_priv,
|
||||
/* PCH refclock must be enabled first */
|
||||
ibx_assert_pch_refclk_enabled(dev_priv);
|
||||
|
||||
intel_de_write(dev_priv, PCH_FP0(id), pll->state.hw_state.fp0);
|
||||
intel_de_write(dev_priv, PCH_FP1(id), pll->state.hw_state.fp1);
|
||||
|
||||
intel_de_write(dev_priv, PCH_DPLL(id), pll->state.hw_state.dpll);
|
||||
|
||||
/* Wait for the clocks to stabilize. */
|
||||
@ -558,7 +525,6 @@ static void ibx_dump_hw_state(struct drm_i915_private *dev_priv,
|
||||
}
|
||||
|
||||
static const struct intel_shared_dpll_funcs ibx_pch_dpll_funcs = {
|
||||
.prepare = ibx_pch_dpll_prepare,
|
||||
.enable = ibx_pch_dpll_enable,
|
||||
.disable = ibx_pch_dpll_disable,
|
||||
.get_hw_state = ibx_pch_dpll_get_hw_state,
|
||||
@ -3136,8 +3102,8 @@ static void icl_update_active_dpll(struct intel_atomic_state *state,
|
||||
enc_to_dig_port(encoder);
|
||||
|
||||
if (primary_port &&
|
||||
(primary_port->tc_mode == TC_PORT_DP_ALT ||
|
||||
primary_port->tc_mode == TC_PORT_LEGACY))
|
||||
(intel_tc_port_in_dp_alt_mode(primary_port) ||
|
||||
intel_tc_port_in_legacy_mode(primary_port)))
|
||||
port_dpll_id = ICL_PORT_DPLL_MG_PHY;
|
||||
|
||||
icl_set_active_port_dpll(crtc_state, port_dpll_id);
|
||||
|
@ -255,16 +255,6 @@ struct intel_shared_dpll_state {
|
||||
* struct intel_shared_dpll_funcs - platform specific hooks for managing DPLLs
|
||||
*/
|
||||
struct intel_shared_dpll_funcs {
|
||||
/**
|
||||
* @prepare:
|
||||
*
|
||||
* Optional hook to perform operations prior to enabling the PLL.
|
||||
* Called from intel_prepare_shared_dpll() function unless the PLL
|
||||
* is already enabled.
|
||||
*/
|
||||
void (*prepare)(struct drm_i915_private *dev_priv,
|
||||
struct intel_shared_dpll *pll);
|
||||
|
||||
/**
|
||||
* @enable:
|
||||
*
|
||||
@ -404,7 +394,6 @@ int intel_dpll_get_freq(struct drm_i915_private *i915,
|
||||
bool intel_dpll_get_hw_state(struct drm_i915_private *i915,
|
||||
struct intel_shared_dpll *pll,
|
||||
struct intel_dpll_hw_state *hw_state);
|
||||
void intel_prepare_shared_dpll(const struct intel_crtc_state *crtc_state);
|
||||
void intel_enable_shared_dpll(const struct intel_crtc_state *crtc_state);
|
||||
void intel_disable_shared_dpll(const struct intel_crtc_state *crtc_state);
|
||||
void intel_shared_dpll_swap_state(struct intel_atomic_state *state);
|
||||
|
drivers/gpu/drm/i915/display/intel_dpt.c (new file, 239 lines)
@ -0,0 +1,239 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
/*
|
||||
* Copyright © 2021 Intel Corporation
|
||||
*/
|
||||
|
||||
#include "i915_drv.h"
|
||||
#include "intel_display_types.h"
|
||||
#include "intel_dpt.h"
|
||||
#include "intel_fb.h"
|
||||
#include "gt/gen8_ppgtt.h"
|
||||
|
||||
struct i915_dpt {
|
||||
struct i915_address_space vm;
|
||||
|
||||
struct drm_i915_gem_object *obj;
|
||||
struct i915_vma *vma;
|
||||
void __iomem *iomem;
|
||||
};
|
||||
|
||||
#define i915_is_dpt(vm) ((vm)->is_dpt)
|
||||
|
||||
static inline struct i915_dpt *
|
||||
i915_vm_to_dpt(struct i915_address_space *vm)
|
||||
{
|
||||
BUILD_BUG_ON(offsetof(struct i915_dpt, vm));
|
||||
GEM_BUG_ON(!i915_is_dpt(vm));
|
||||
return container_of(vm, struct i915_dpt, vm);
|
||||
}
|
||||
|
||||
#define dpt_total_entries(dpt) ((dpt)->vm.total >> PAGE_SHIFT)
|
||||
|
||||
static void gen8_set_pte(void __iomem *addr, gen8_pte_t pte)
|
||||
{
|
||||
writeq(pte, addr);
|
||||
}
|
||||
|
||||
static void dpt_insert_page(struct i915_address_space *vm,
|
||||
dma_addr_t addr,
|
||||
u64 offset,
|
||||
enum i915_cache_level level,
|
||||
u32 flags)
|
||||
{
|
||||
struct i915_dpt *dpt = i915_vm_to_dpt(vm);
|
||||
gen8_pte_t __iomem *base = dpt->iomem;
|
||||
|
||||
gen8_set_pte(base + offset / I915_GTT_PAGE_SIZE,
|
||||
vm->pte_encode(addr, level, flags));
|
||||
}
|
||||
|
||||
static void dpt_insert_entries(struct i915_address_space *vm,
|
||||
struct i915_vma *vma,
|
||||
enum i915_cache_level level,
|
||||
u32 flags)
|
||||
{
|
||||
struct i915_dpt *dpt = i915_vm_to_dpt(vm);
|
||||
gen8_pte_t __iomem *base = dpt->iomem;
|
||||
const gen8_pte_t pte_encode = vm->pte_encode(0, level, flags);
|
||||
struct sgt_iter sgt_iter;
|
||||
dma_addr_t addr;
|
||||
int i;
|
||||
|
||||
/*
|
||||
* Note that we ignore PTE_READ_ONLY here. The caller must be careful
|
||||
* not to allow the user to override access to a read only page.
|
||||
*/
|
||||
|
||||
i = vma->node.start / I915_GTT_PAGE_SIZE;
|
||||
for_each_sgt_daddr(addr, sgt_iter, vma->pages)
|
||||
gen8_set_pte(&base[i++], pte_encode | addr);
|
||||
}
|
||||
|
||||
static void dpt_clear_range(struct i915_address_space *vm,
|
||||
u64 start, u64 length)
|
||||
{
|
||||
}
|
||||
|
||||
static void dpt_bind_vma(struct i915_address_space *vm,
|
||||
struct i915_vm_pt_stash *stash,
|
||||
struct i915_vma *vma,
|
||||
enum i915_cache_level cache_level,
|
||||
u32 flags)
|
||||
{
|
||||
struct drm_i915_gem_object *obj = vma->obj;
|
||||
u32 pte_flags;
|
||||
|
||||
/* Applicable to VLV (gen8+ do not support RO in the GGTT) */
|
||||
pte_flags = 0;
|
||||
if (vma->vm->has_read_only && i915_gem_object_is_readonly(obj))
|
||||
pte_flags |= PTE_READ_ONLY;
|
||||
if (i915_gem_object_is_lmem(obj))
|
||||
pte_flags |= PTE_LM;
|
||||
|
||||
vma->vm->insert_entries(vma->vm, vma, cache_level, pte_flags);
|
||||
|
||||
vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
|
||||
|
||||
/*
|
||||
* Without aliasing PPGTT there's no difference between
|
||||
* GLOBAL/LOCAL_BIND, it's all the same ptes. Hence unconditionally
|
||||
* upgrade to both bound if we bind either to avoid double-binding.
|
||||
*/
|
||||
atomic_or(I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND, &vma->flags);
|
||||
}
|
||||
|
||||
static void dpt_unbind_vma(struct i915_address_space *vm, struct i915_vma *vma)
|
||||
{
|
||||
vm->clear_range(vm, vma->node.start, vma->size);
|
||||
}
|
||||
|
||||
static void dpt_cleanup(struct i915_address_space *vm)
|
||||
{
|
||||
struct i915_dpt *dpt = i915_vm_to_dpt(vm);
|
||||
|
||||
i915_gem_object_put(dpt->obj);
|
||||
}
|
||||
|
||||
struct i915_vma *intel_dpt_pin(struct i915_address_space *vm)
|
||||
{
|
||||
struct drm_i915_private *i915 = vm->i915;
|
||||
struct i915_dpt *dpt = i915_vm_to_dpt(vm);
|
||||
intel_wakeref_t wakeref;
|
||||
struct i915_vma *vma;
|
||||
void __iomem *iomem;
|
||||
struct i915_gem_ww_ctx ww;
|
||||
int err;
|
||||
|
||||
wakeref = intel_runtime_pm_get(&i915->runtime_pm);
|
||||
atomic_inc(&i915->gpu_error.pending_fb_pin);
|
||||
|
||||
for_i915_gem_ww(&ww, err, true) {
|
||||
err = i915_gem_object_lock(dpt->obj, &ww);
|
||||
if (err)
|
||||
continue;
|
||||
|
||||
vma = i915_gem_object_ggtt_pin_ww(dpt->obj, &ww, NULL, 0, 4096,
|
||||
HAS_LMEM(i915) ? 0 : PIN_MAPPABLE);
|
||||
if (IS_ERR(vma)) {
|
||||
err = PTR_ERR(vma);
|
||||
continue;
|
||||
}
|
||||
|
||||
iomem = i915_vma_pin_iomap(vma);
|
||||
i915_vma_unpin(vma);
|
||||
|
||||
if (IS_ERR(iomem)) {
|
||||
err = PTR_ERR(iomem);
|
||||
continue;
|
||||
}
|
||||
|
||||
dpt->vma = vma;
|
||||
dpt->iomem = iomem;
|
||||
|
||||
i915_vma_get(vma);
|
||||
}
|
||||
|
||||
atomic_dec(&i915->gpu_error.pending_fb_pin);
|
||||
intel_runtime_pm_put(&i915->runtime_pm, wakeref);
|
||||
|
||||
return err ? ERR_PTR(err) : vma;
|
||||
}
|
||||
|
||||
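intel_dpt_pin() above is one of the users of the for_i915_gem_ww() retry loop: the body runs under a ww acquire context, and any -EDEADLK from a lock attempt rolls back and reruns the body. A hedged, generic sketch of the same pattern (object names and the surrounding function are made up):

/* Sketch of the ww retry loop: `continue` on error lets the loop helper
 * decide whether to back off and retry (-EDEADLK) or bail out. */
static int example_lock_two_objects(struct drm_i915_gem_object *a,
				    struct drm_i915_gem_object *b)
{
	struct i915_gem_ww_ctx ww;
	int err;

	for_i915_gem_ww(&ww, err, true) {	/* true = interruptible */
		err = i915_gem_object_lock(a, &ww);
		if (err)
			continue;

		err = i915_gem_object_lock(b, &ww);
		if (err)
			continue;

		/* both objects are locked here; do the actual work */
	}

	return err;
}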
void intel_dpt_unpin(struct i915_address_space *vm)
|
||||
{
|
||||
struct i915_dpt *dpt = i915_vm_to_dpt(vm);
|
||||
|
||||
i915_vma_unpin_iomap(dpt->vma);
|
||||
i915_vma_put(dpt->vma);
|
||||
}
|
||||
|
||||
struct i915_address_space *
|
||||
intel_dpt_create(struct intel_framebuffer *fb)
|
||||
{
|
||||
struct drm_gem_object *obj = &intel_fb_obj(&fb->base)->base;
|
||||
struct drm_i915_private *i915 = to_i915(obj->dev);
|
||||
struct drm_i915_gem_object *dpt_obj;
|
||||
struct i915_address_space *vm;
|
||||
struct i915_dpt *dpt;
|
||||
size_t size;
|
||||
int ret;
|
||||
|
||||
if (intel_fb_needs_pot_stride_remap(fb))
|
||||
size = intel_remapped_info_size(&fb->remapped_view.gtt.remapped);
|
||||
else
|
||||
size = DIV_ROUND_UP_ULL(obj->size, I915_GTT_PAGE_SIZE);
|
||||
|
||||
size = round_up(size * sizeof(gen8_pte_t), I915_GTT_PAGE_SIZE);
|
||||
|
||||
if (HAS_LMEM(i915))
|
||||
dpt_obj = i915_gem_object_create_lmem(i915, size, 0);
|
||||
else
|
||||
dpt_obj = i915_gem_object_create_stolen(i915, size);
|
||||
if (IS_ERR(dpt_obj))
|
||||
return ERR_CAST(dpt_obj);
|
||||
|
||||
ret = i915_gem_object_set_cache_level(dpt_obj, I915_CACHE_NONE);
|
||||
if (ret) {
|
||||
i915_gem_object_put(dpt_obj);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
|
||||
dpt = kzalloc(sizeof(*dpt), GFP_KERNEL);
|
||||
if (!dpt) {
|
||||
i915_gem_object_put(dpt_obj);
|
||||
return ERR_PTR(-ENOMEM);
|
||||
}
|
||||
|
||||
vm = &dpt->vm;
|
||||
|
||||
vm->gt = &i915->gt;
|
||||
vm->i915 = i915;
|
||||
vm->dma = i915->drm.dev;
|
||||
vm->total = (size / sizeof(gen8_pte_t)) * I915_GTT_PAGE_SIZE;
|
||||
vm->is_dpt = true;
|
||||
|
||||
i915_address_space_init(vm, VM_CLASS_DPT);
|
||||
|
||||
vm->insert_page = dpt_insert_page;
|
||||
vm->clear_range = dpt_clear_range;
|
||||
vm->insert_entries = dpt_insert_entries;
|
||||
vm->cleanup = dpt_cleanup;
|
||||
|
||||
vm->vma_ops.bind_vma = dpt_bind_vma;
|
||||
vm->vma_ops.unbind_vma = dpt_unbind_vma;
|
||||
vm->vma_ops.set_pages = ggtt_set_pages;
|
||||
vm->vma_ops.clear_pages = clear_pages;
|
||||
|
||||
vm->pte_encode = gen8_ggtt_pte_encode;
|
||||
|
||||
dpt->obj = dpt_obj;
|
||||
|
||||
return &dpt->vm;
|
||||
}
|
||||
|
||||
void intel_dpt_destroy(struct i915_address_space *vm)
|
||||
{
|
||||
struct i915_dpt *dpt = i915_vm_to_dpt(vm);
|
||||
|
||||
i915_vm_close(&dpt->vm);
|
||||
}
|
drivers/gpu/drm/i915/display/intel_dpt.h (new file, 19 lines)
@ -0,0 +1,19 @@
|
||||
/* SPDX-License-Identifier: MIT */
|
||||
/*
|
||||
* Copyright © 2021 Intel Corporation
|
||||
*/
|
||||
|
||||
#ifndef __INTEL_DPT_H__
|
||||
#define __INTEL_DPT_H__
|
||||
|
||||
struct i915_address_space;
|
||||
struct i915_vma;
|
||||
struct intel_framebuffer;
|
||||
|
||||
void intel_dpt_destroy(struct i915_address_space *vm);
|
||||
struct i915_vma *intel_dpt_pin(struct i915_address_space *vm);
|
||||
void intel_dpt_unpin(struct i915_address_space *vm);
|
||||
struct i915_address_space *
|
||||
intel_dpt_create(struct intel_framebuffer *fb);
|
||||
|
||||
#endif /* __INTEL_DPT_H__ */
|
drivers/gpu/drm/i915/display/intel_drrs.c (new file, 437 lines)
@ -0,0 +1,437 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
/*
|
||||
* Copyright © 2021 Intel Corporation
|
||||
*/
|
||||
|
||||
#include "i915_drv.h"
|
||||
#include "intel_atomic.h"
|
||||
#include "intel_de.h"
|
||||
#include "intel_display_types.h"
|
||||
#include "intel_drrs.h"
|
||||
#include "intel_panel.h"
|
||||
|
||||
/**
 * DOC: Display Refresh Rate Switching (DRRS)
 *
 * Display Refresh Rate Switching (DRRS) is a power conservation feature
 * which enables switching between low and high refresh rates,
 * dynamically, based on the usage scenario. This feature is applicable
 * for internal panels.
 *
 * Indication that the panel supports DRRS is given by the panel EDID, which
 * would list multiple refresh rates for one resolution.
 *
 * DRRS is of 2 types - static and seamless.
 * Static DRRS involves changing the refresh rate (RR) by doing a full modeset
 * (may appear as a blink on screen) and is used in the dock/undock scenario.
 * Seamless DRRS involves changing the RR without any visual effect to the user
 * and can be used during normal system usage. This is done by programming
 * certain registers.
 *
 * Support for static/seamless DRRS may be indicated in the VBT based on
 * inputs from the panel spec.
 *
 * DRRS saves power by switching to a low RR based on usage scenarios.
 *
 * The implementation is based on frontbuffer tracking. When there is a
 * disturbance on the screen triggered by user activity or a periodic
 * system activity, DRRS is disabled (the RR is changed to the high RR).
 * When there is no movement on screen, after a timeout of 1 second, a
 * switch to the low RR is made.
 *
 * For integration with the frontbuffer tracking code, intel_drrs_invalidate()
 * and intel_drrs_flush() are called.
 *
 * DRRS can be further extended to support other internal panels and also
 * the scenario of video playback wherein the RR is set based on the rate
 * requested by userspace.
 */
|
||||
|
||||
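A condensed restatement of the frontbuffer-tracking behaviour described above, as implemented by intel_drrs_frontbuffer_update() further down (illustration only; locking, the NULL checks and the pipe mask are omitted, and the wrapper name is invented):

/* Sketch: any activity on tracked planes forces the high refresh rate;
 * once a flush leaves no busy bits, arm the 1 second downclock timer. */
static void example_drrs_frontbuffer_event(struct drm_i915_private *i915,
					   const struct intel_crtc_state *crtc_state,
					   unsigned int frontbuffer_bits,
					   bool invalidate)
{
	if (invalidate)
		i915->drrs.busy_frontbuffer_bits |= frontbuffer_bits;
	else
		i915->drrs.busy_frontbuffer_bits &= ~frontbuffer_bits;

	if (frontbuffer_bits)
		intel_drrs_set_state(i915, crtc_state, DRRS_HIGH_RR);

	if (!invalidate && !i915->drrs.busy_frontbuffer_bits)
		schedule_delayed_work(&i915->drrs.work, msecs_to_jiffies(1000));
}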
void
|
||||
intel_drrs_compute_config(struct intel_dp *intel_dp,
|
||||
struct intel_crtc_state *pipe_config,
|
||||
int output_bpp, bool constant_n)
|
||||
{
|
||||
struct intel_connector *intel_connector = intel_dp->attached_connector;
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
int pixel_clock;
|
||||
|
||||
if (pipe_config->vrr.enable)
|
||||
return;
|
||||
|
||||
/*
 * DRRS and PSR can't be enabled together, and preference is given to PSR
 * as it allows more power savings by completely shutting down the display.
 * To guarantee this, intel_drrs_compute_config() must be called after
 * intel_psr_compute_config().
 */
|
||||
if (pipe_config->has_psr)
|
||||
return;
|
||||
|
||||
if (!intel_connector->panel.downclock_mode ||
|
||||
dev_priv->drrs.type != SEAMLESS_DRRS_SUPPORT)
|
||||
return;
|
||||
|
||||
pipe_config->has_drrs = true;
|
||||
|
||||
pixel_clock = intel_connector->panel.downclock_mode->clock;
|
||||
if (pipe_config->splitter.enable)
|
||||
pixel_clock /= pipe_config->splitter.link_count;
|
||||
|
||||
intel_link_compute_m_n(output_bpp, pipe_config->lane_count, pixel_clock,
|
||||
pipe_config->port_clock, &pipe_config->dp_m2_n2,
|
||||
constant_n, pipe_config->fec_enable);
|
||||
|
||||
/* FIXME: abstract this better */
|
||||
if (pipe_config->splitter.enable)
|
||||
pipe_config->dp_m2_n2.gmch_m *= pipe_config->splitter.link_count;
|
||||
}
|
||||
|
||||
static void intel_drrs_set_state(struct drm_i915_private *dev_priv,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
enum drrs_refresh_rate_type refresh_type)
|
||||
{
|
||||
struct intel_dp *intel_dp = dev_priv->drrs.dp;
|
||||
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
|
||||
struct drm_display_mode *mode;
|
||||
|
||||
if (!intel_dp) {
|
||||
drm_dbg_kms(&dev_priv->drm, "DRRS not supported.\n");
|
||||
return;
|
||||
}
|
||||
|
||||
if (!crtc) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"DRRS: intel_crtc not initialized\n");
|
||||
return;
|
||||
}
|
||||
|
||||
if (dev_priv->drrs.type < SEAMLESS_DRRS_SUPPORT) {
|
||||
drm_dbg_kms(&dev_priv->drm, "Only Seamless DRRS supported.\n");
|
||||
return;
|
||||
}
|
||||
|
||||
if (refresh_type == dev_priv->drrs.refresh_rate_type)
|
||||
return;
|
||||
|
||||
if (!crtc_state->hw.active) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"eDP encoder disabled. CRTC not Active\n");
|
||||
return;
|
||||
}
|
||||
|
||||
if (DISPLAY_VER(dev_priv) >= 8 && !IS_CHERRYVIEW(dev_priv)) {
|
||||
switch (refresh_type) {
|
||||
case DRRS_HIGH_RR:
|
||||
intel_dp_set_m_n(crtc_state, M1_N1);
|
||||
break;
|
||||
case DRRS_LOW_RR:
|
||||
intel_dp_set_m_n(crtc_state, M2_N2);
|
||||
break;
|
||||
case DRRS_MAX_RR:
|
||||
default:
|
||||
drm_err(&dev_priv->drm,
|
||||
"Unsupported refreshrate type\n");
|
||||
}
|
||||
} else if (DISPLAY_VER(dev_priv) > 6) {
|
||||
i915_reg_t reg = PIPECONF(crtc_state->cpu_transcoder);
|
||||
u32 val;
|
||||
|
||||
val = intel_de_read(dev_priv, reg);
|
||||
if (refresh_type == DRRS_LOW_RR) {
|
||||
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
|
||||
val |= PIPECONF_EDP_RR_MODE_SWITCH_VLV;
|
||||
else
|
||||
val |= PIPECONF_EDP_RR_MODE_SWITCH;
|
||||
} else {
|
||||
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
|
||||
val &= ~PIPECONF_EDP_RR_MODE_SWITCH_VLV;
|
||||
else
|
||||
val &= ~PIPECONF_EDP_RR_MODE_SWITCH;
|
||||
}
|
||||
intel_de_write(dev_priv, reg, val);
|
||||
}
|
||||
|
||||
dev_priv->drrs.refresh_rate_type = refresh_type;
|
||||
|
||||
if (refresh_type == DRRS_LOW_RR)
|
||||
mode = intel_dp->attached_connector->panel.downclock_mode;
|
||||
else
|
||||
mode = intel_dp->attached_connector->panel.fixed_mode;
|
||||
drm_dbg_kms(&dev_priv->drm, "eDP Refresh Rate set to : %dHz\n",
|
||||
drm_mode_vrefresh(mode));
|
||||
}
|
||||
|
||||
static void
|
||||
intel_drrs_enable_locked(struct intel_dp *intel_dp)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
|
||||
dev_priv->drrs.busy_frontbuffer_bits = 0;
|
||||
dev_priv->drrs.dp = intel_dp;
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_drrs_enable - init drrs struct if supported
|
||||
* @intel_dp: DP struct
|
||||
* @crtc_state: A pointer to the active crtc state.
|
||||
*
|
||||
* Initializes frontbuffer_bits and drrs.dp
|
||||
*/
|
||||
void intel_drrs_enable(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
|
||||
if (!crtc_state->has_drrs)
|
||||
return;
|
||||
|
||||
drm_dbg_kms(&dev_priv->drm, "Enabling DRRS\n");
|
||||
|
||||
mutex_lock(&dev_priv->drrs.mutex);
|
||||
|
||||
if (dev_priv->drrs.dp) {
|
||||
drm_warn(&dev_priv->drm, "DRRS already enabled\n");
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
intel_drrs_enable_locked(intel_dp);
|
||||
|
||||
unlock:
|
||||
mutex_unlock(&dev_priv->drrs.mutex);
|
||||
}
|
||||
|
||||
static void
|
||||
intel_drrs_disable_locked(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
|
||||
intel_drrs_set_state(dev_priv, crtc_state, DRRS_HIGH_RR);
|
||||
dev_priv->drrs.dp = NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_drrs_disable - Disable DRRS
|
||||
* @intel_dp: DP struct
|
||||
* @old_crtc_state: Pointer to old crtc_state.
|
||||
*
|
||||
*/
|
||||
void intel_drrs_disable(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *old_crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
|
||||
if (!old_crtc_state->has_drrs)
|
||||
return;
|
||||
|
||||
mutex_lock(&dev_priv->drrs.mutex);
|
||||
if (!dev_priv->drrs.dp) {
|
||||
mutex_unlock(&dev_priv->drrs.mutex);
|
||||
return;
|
||||
}
|
||||
|
||||
intel_drrs_disable_locked(intel_dp, old_crtc_state);
|
||||
mutex_unlock(&dev_priv->drrs.mutex);
|
||||
|
||||
cancel_delayed_work_sync(&dev_priv->drrs.work);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_drrs_update - Update DRRS state
|
||||
* @intel_dp: Intel DP
|
||||
* @crtc_state: new CRTC state
|
||||
*
|
||||
* This function will update DRRS states, disabling or enabling DRRS when
|
||||
* executing fastsets. For full modeset, intel_drrs_disable() and
|
||||
* intel_drrs_enable() should be called instead.
|
||||
*/
|
||||
void
|
||||
intel_drrs_update(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
|
||||
if (dev_priv->drrs.type != SEAMLESS_DRRS_SUPPORT)
|
||||
return;
|
||||
|
||||
mutex_lock(&dev_priv->drrs.mutex);
|
||||
|
||||
/* New state matches current one? */
|
||||
if (crtc_state->has_drrs == !!dev_priv->drrs.dp)
|
||||
goto unlock;
|
||||
|
||||
if (crtc_state->has_drrs)
|
||||
intel_drrs_enable_locked(intel_dp);
|
||||
else
|
||||
intel_drrs_disable_locked(intel_dp, crtc_state);
|
||||
|
||||
unlock:
|
||||
mutex_unlock(&dev_priv->drrs.mutex);
|
||||
}
|
||||
|
||||
static void intel_drrs_downclock_work(struct work_struct *work)
|
||||
{
|
||||
struct drm_i915_private *dev_priv =
|
||||
container_of(work, typeof(*dev_priv), drrs.work.work);
|
||||
struct intel_dp *intel_dp;
|
||||
struct drm_crtc *crtc;
|
||||
|
||||
mutex_lock(&dev_priv->drrs.mutex);
|
||||
|
||||
intel_dp = dev_priv->drrs.dp;
|
||||
|
||||
if (!intel_dp)
|
||||
goto unlock;
|
||||
|
||||
/*
|
||||
* The delayed work can race with an invalidate hence we need to
|
||||
* recheck.
|
||||
*/
|
||||
|
||||
if (dev_priv->drrs.busy_frontbuffer_bits)
|
||||
goto unlock;
|
||||
|
||||
crtc = dp_to_dig_port(intel_dp)->base.base.crtc;
|
||||
intel_drrs_set_state(dev_priv, to_intel_crtc(crtc)->config, DRRS_LOW_RR);
|
||||
|
||||
unlock:
|
||||
mutex_unlock(&dev_priv->drrs.mutex);
|
||||
}
|
||||
|
||||
static void intel_drrs_frontbuffer_update(struct drm_i915_private *dev_priv,
|
||||
unsigned int frontbuffer_bits,
|
||||
bool invalidate)
|
||||
{
|
||||
struct intel_dp *intel_dp;
|
||||
struct drm_crtc *crtc;
|
||||
enum pipe pipe;
|
||||
|
||||
if (dev_priv->drrs.type == DRRS_NOT_SUPPORTED)
|
||||
return;
|
||||
|
||||
cancel_delayed_work(&dev_priv->drrs.work);
|
||||
|
||||
mutex_lock(&dev_priv->drrs.mutex);
|
||||
|
||||
intel_dp = dev_priv->drrs.dp;
|
||||
if (!intel_dp) {
|
||||
mutex_unlock(&dev_priv->drrs.mutex);
|
||||
return;
|
||||
}
|
||||
|
||||
crtc = dp_to_dig_port(intel_dp)->base.base.crtc;
|
||||
pipe = to_intel_crtc(crtc)->pipe;
|
||||
|
||||
frontbuffer_bits &= INTEL_FRONTBUFFER_ALL_MASK(pipe);
|
||||
if (invalidate)
|
||||
dev_priv->drrs.busy_frontbuffer_bits |= frontbuffer_bits;
|
||||
else
|
||||
dev_priv->drrs.busy_frontbuffer_bits &= ~frontbuffer_bits;
|
||||
|
||||
/* flush/invalidate means busy screen hence upclock */
|
||||
if (frontbuffer_bits)
|
||||
intel_drrs_set_state(dev_priv, to_intel_crtc(crtc)->config,
|
||||
DRRS_HIGH_RR);
|
||||
|
||||
/*
|
||||
* flush also means no more activity hence schedule downclock, if all
|
||||
* other fbs are quiescent too
|
||||
*/
|
||||
if (!invalidate && !dev_priv->drrs.busy_frontbuffer_bits)
|
||||
schedule_delayed_work(&dev_priv->drrs.work,
|
||||
msecs_to_jiffies(1000));
|
||||
mutex_unlock(&dev_priv->drrs.mutex);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_drrs_invalidate - Disable Idleness DRRS
|
||||
* @dev_priv: i915 device
|
||||
* @frontbuffer_bits: frontbuffer plane tracking bits
|
||||
*
|
||||
* This function gets called every time rendering on the given planes starts.
|
||||
* Hence DRRS needs to be Upclocked, i.e. (LOW_RR -> HIGH_RR).
|
||||
*
|
||||
* Dirty frontbuffers relevant to DRRS are tracked in busy_frontbuffer_bits.
|
||||
*/
|
||||
void intel_drrs_invalidate(struct drm_i915_private *dev_priv,
|
||||
unsigned int frontbuffer_bits)
|
||||
{
|
||||
intel_drrs_frontbuffer_update(dev_priv, frontbuffer_bits, true);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_drrs_flush - Restart Idleness DRRS
|
||||
* @dev_priv: i915 device
|
||||
* @frontbuffer_bits: frontbuffer plane tracking bits
|
||||
*
|
||||
* This function gets called every time rendering on the given planes has
|
||||
* completed or flip on a crtc is completed. So DRRS should be upclocked
|
||||
* (LOW_RR -> HIGH_RR). And also Idleness detection should be started again,
|
||||
* if no other planes are dirty.
|
||||
*
|
||||
* Dirty frontbuffers relevant to DRRS are tracked in busy_frontbuffer_bits.
|
||||
*/
|
||||
void intel_drrs_flush(struct drm_i915_private *dev_priv,
|
||||
unsigned int frontbuffer_bits)
|
||||
{
|
||||
intel_drrs_frontbuffer_update(dev_priv, frontbuffer_bits, false);
|
||||
}
|
||||
|
||||
void intel_drrs_page_flip(struct intel_atomic_state *state,
|
||||
struct intel_crtc *crtc)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
|
||||
unsigned int frontbuffer_bits = INTEL_FRONTBUFFER_ALL_MASK(crtc->pipe);
|
||||
|
||||
intel_drrs_frontbuffer_update(dev_priv, frontbuffer_bits, false);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_drrs_init - Init basic DRRS work and mutex.
|
||||
* @connector: eDP connector
|
||||
* @fixed_mode: preferred mode of panel
|
||||
*
|
||||
* This function is called only once at driver load to initialize basic
|
||||
* DRRS stuff.
|
||||
*
|
||||
* Returns:
|
||||
* Downclock mode if panel supports it, else return NULL.
|
||||
* DRRS support is determined by the presence of downclock mode (apart
|
||||
* from VBT setting).
|
||||
*/
|
||||
struct drm_display_mode *
|
||||
intel_drrs_init(struct intel_connector *connector,
|
||||
struct drm_display_mode *fixed_mode)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
|
||||
struct drm_display_mode *downclock_mode = NULL;
|
||||
|
||||
INIT_DELAYED_WORK(&dev_priv->drrs.work, intel_drrs_downclock_work);
|
||||
mutex_init(&dev_priv->drrs.mutex);
|
||||
|
||||
if (DISPLAY_VER(dev_priv) <= 6) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"DRRS supported for Gen7 and above\n");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
if (dev_priv->vbt.drrs_type != SEAMLESS_DRRS_SUPPORT) {
|
||||
drm_dbg_kms(&dev_priv->drm, "VBT doesn't support DRRS\n");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
downclock_mode = intel_panel_edid_downclock_mode(connector, fixed_mode);
|
||||
if (!downclock_mode) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"Downclock mode is not found. DRRS not supported\n");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
dev_priv->drrs.type = dev_priv->vbt.drrs_type;
|
||||
|
||||
dev_priv->drrs.refresh_rate_type = DRRS_HIGH_RR;
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"seamless DRRS supported for eDP panel.\n");
|
||||
return downclock_mode;
|
||||
}
|
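As the kernel-doc above notes, intel_drrs_init() only hands back a downclock mode when both the VBT and the EDID indicate seamless DRRS is possible; a caller wiring up an eDP connector would typically just stash the result in the panel, roughly as below (the surrounding function is hypothetical):

/* Hypothetical caller: probe for a DRRS downclock mode during eDP
 * connector setup and remember it in the panel state. */
static void example_edp_drrs_setup(struct intel_connector *connector,
				   struct drm_display_mode *fixed_mode)
{
	struct drm_display_mode *downclock_mode =
		intel_drrs_init(connector, fixed_mode);

	/* NULL simply means DRRS stays disabled for this panel. */
	connector->panel.downclock_mode = downclock_mode;
}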
drivers/gpu/drm/i915/display/intel_drrs.h (new file, 36 lines)
@ -0,0 +1,36 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2021 Intel Corporation
 */

#ifndef __INTEL_DRRS_H__
#define __INTEL_DRRS_H__

#include <linux/types.h>

struct drm_i915_private;
struct intel_atomic_state;
struct intel_crtc;
struct intel_crtc_state;
struct intel_connector;
struct intel_dp;

void intel_drrs_enable(struct intel_dp *intel_dp,
		       const struct intel_crtc_state *crtc_state);
void intel_drrs_disable(struct intel_dp *intel_dp,
			const struct intel_crtc_state *crtc_state);
void intel_drrs_update(struct intel_dp *intel_dp,
		       const struct intel_crtc_state *crtc_state);
void intel_drrs_invalidate(struct drm_i915_private *dev_priv,
			   unsigned int frontbuffer_bits);
void intel_drrs_flush(struct drm_i915_private *dev_priv,
		      unsigned int frontbuffer_bits);
void intel_drrs_page_flip(struct intel_atomic_state *state,
			  struct intel_crtc *crtc);
void intel_drrs_compute_config(struct intel_dp *intel_dp,
			       struct intel_crtc_state *pipe_config,
			       int output_bpp, bool constant_n);
struct drm_display_mode *intel_drrs_init(struct intel_connector *connector,
					 struct drm_display_mode *fixed_mode);

#endif /* __INTEL_DRRS_H__ */
@@ -5,6 +5,7 @@

#include <drm/drm_mipi_dsi.h>
#include "intel_dsi.h"
#include "intel_panel.h"

int intel_dsi_bitrate(const struct intel_dsi *intel_dsi)
{
@@ -60,20 +61,19 @@ enum drm_mode_status intel_dsi_mode_valid(struct drm_connector *connector,
	struct intel_connector *intel_connector = to_intel_connector(connector);
	const struct drm_display_mode *fixed_mode = intel_connector->panel.fixed_mode;
	int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
	enum drm_mode_status status;

	drm_dbg_kms(&dev_priv->drm, "\n");

	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
		return MODE_NO_DBLESCAN;

	if (fixed_mode) {
		if (mode->hdisplay > fixed_mode->hdisplay)
			return MODE_PANEL;
		if (mode->vdisplay > fixed_mode->vdisplay)
			return MODE_PANEL;
		if (fixed_mode->clock > max_dotclk)
			return MODE_CLOCK_HIGH;
	}
	status = intel_panel_mode_valid(intel_connector, mode);
	if (status != MODE_OK)
		return status;

	if (fixed_mode->clock > max_dotclk)
		return MODE_CLOCK_HIGH;

	return intel_mode_valid_max_plane_size(dev_priv, mode, false);
}
@@ -207,6 +207,9 @@ u32 bxt_dsi_get_pclk(struct intel_encoder *encoder,
		     struct intel_crtc_state *config);
void bxt_dsi_reset_clocks(struct intel_encoder *encoder, enum port port);

void assert_dsi_pll_enabled(struct drm_i915_private *i915);
void assert_dsi_pll_disabled(struct drm_i915_private *i915);

/* intel_dsi_vbt.c */
bool intel_dsi_vbt_init(struct intel_dsi *intel_dsi, u16 panel_id);
void intel_dsi_vbt_gpio_init(struct intel_dsi *intel_dsi, bool panel_is_on);
@@ -47,33 +47,42 @@ static u32 dcs_get_backlight(struct intel_connector *connector, enum pipe unused
{
	struct intel_encoder *encoder = intel_attached_encoder(connector);
	struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
	struct intel_panel *panel = &connector->panel;
	struct mipi_dsi_device *dsi_device;
	u8 data = 0;
	u8 data[2] = {};
	enum port port;
	size_t len = panel->backlight.max > U8_MAX ? 2 : 1;

	/* FIXME: Need to take care of 16 bit brightness level */
	for_each_dsi_port(port, intel_dsi->dcs_backlight_ports) {
		dsi_device = intel_dsi->dsi_hosts[port]->device;
		mipi_dsi_dcs_read(dsi_device, MIPI_DCS_GET_DISPLAY_BRIGHTNESS,
				  &data, sizeof(data));
				  &data, len);
		break;
	}

	return data;
	return (data[1] << 8) | data[0];
}

static void dcs_set_backlight(const struct drm_connector_state *conn_state, u32 level)
{
	struct intel_dsi *intel_dsi = enc_to_intel_dsi(to_intel_encoder(conn_state->best_encoder));
	struct intel_panel *panel = &to_intel_connector(conn_state->connector)->panel;
	struct mipi_dsi_device *dsi_device;
	u8 data = level;
	u8 data[2] = {};
	enum port port;
	size_t len = panel->backlight.max > U8_MAX ? 2 : 1;

	if (len == 1) {
		data[0] = level;
	} else {
		data[0] = level >> 8;
		data[1] = level;
	}

	/* FIXME: Need to take care of 16 bit brightness level */
	for_each_dsi_port(port, intel_dsi->dcs_backlight_ports) {
		dsi_device = intel_dsi->dsi_hosts[port]->device;
		mipi_dsi_dcs_write(dsi_device, MIPI_DCS_SET_DISPLAY_BRIGHTNESS,
				   &data, sizeof(data));
				   &data, len);
	}
}

@@ -147,10 +156,16 @@ static void dcs_enable_backlight(const struct intel_crtc_state *crtc_state,
static int dcs_setup_backlight(struct intel_connector *connector,
			       enum pipe unused)
{
	struct drm_device *dev = connector->base.dev;
	struct drm_i915_private *dev_priv = to_i915(dev);
	struct intel_panel *panel = &connector->panel;

	panel->backlight.max = PANEL_PWM_MAX_VALUE;
	panel->backlight.level = PANEL_PWM_MAX_VALUE;
	if (dev_priv->vbt.backlight.brightness_precision_bits > 8)
		panel->backlight.max = (1 << dev_priv->vbt.backlight.brightness_precision_bits) - 1;
	else
		panel->backlight.max = PANEL_PWM_MAX_VALUE;

	panel->backlight.level = panel->backlight.max;

	return 0;
}
@@ -223,9 +223,10 @@ static enum drm_mode_status
intel_dvo_mode_valid(struct drm_connector *connector,
		     struct drm_display_mode *mode)
{
	struct intel_dvo *intel_dvo = intel_attached_dvo(to_intel_connector(connector));
	struct intel_connector *intel_connector = to_intel_connector(connector);
	struct intel_dvo *intel_dvo = intel_attached_dvo(intel_connector);
	const struct drm_display_mode *fixed_mode =
		to_intel_connector(connector)->panel.fixed_mode;
		intel_connector->panel.fixed_mode;
	int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
	int target_clock = mode->clock;

@@ -235,10 +236,11 @@ intel_dvo_mode_valid(struct drm_connector *connector,
	/* XXX: Validate clock range */

	if (fixed_mode) {
		if (mode->hdisplay > fixed_mode->hdisplay)
			return MODE_PANEL;
		if (mode->vdisplay > fixed_mode->vdisplay)
			return MODE_PANEL;
		enum drm_mode_status status;

		status = intel_panel_mode_valid(intel_connector, mode);
		if (status != MODE_OK)
			return status;

		target_clock = fixed_mode->clock;
	}
@@ -254,6 +256,7 @@ static int intel_dvo_compute_config(struct intel_encoder *encoder,
				    struct drm_connector_state *conn_state)
{
	struct intel_dvo *intel_dvo = enc_to_dvo(encoder);
	struct intel_connector *connector = to_intel_connector(conn_state->connector);
	const struct drm_display_mode *fixed_mode =
		intel_dvo->attached_connector->panel.fixed_mode;
	struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
@@ -264,8 +267,13 @@ static int intel_dvo_compute_config(struct intel_encoder *encoder,
	 * with the panel scaling set up to source from the H/VDisplay
	 * of the original mode.
	 */
	if (fixed_mode)
		intel_fixed_panel_mode(fixed_mode, adjusted_mode);
	if (fixed_mode) {
		int ret;

		ret = intel_panel_compute_config(connector, adjusted_mode);
		if (ret)
			return ret;
	}

	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
		return -EINVAL;
@ -4,9 +4,11 @@
|
||||
*/
|
||||
|
||||
#include <drm/drm_framebuffer.h>
|
||||
#include <drm/drm_modeset_helper.h>
|
||||
|
||||
#include "intel_display.h"
|
||||
#include "intel_display_types.h"
|
||||
#include "intel_dpt.h"
|
||||
#include "intel_fb.h"
|
||||
|
||||
#define check_array_bounds(i915, a, i) drm_WARN_ON(&(i915)->drm, (i) >= ARRAY_SIZE(a))
|
||||
@ -61,6 +63,38 @@ int skl_ccs_to_main_plane(const struct drm_framebuffer *fb, int ccs_plane)
|
||||
return ccs_plane - fb->format->num_planes / 2;
|
||||
}
|
||||
|
||||
static unsigned int gen12_aligned_scanout_stride(const struct intel_framebuffer *fb,
|
||||
int color_plane)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(fb->base.dev);
|
||||
unsigned int stride = fb->base.pitches[color_plane];
|
||||
|
||||
if (IS_ALDERLAKE_P(i915))
|
||||
return roundup_pow_of_two(max(stride,
|
||||
8u * intel_tile_width_bytes(&fb->base, color_plane)));
|
||||
|
||||
return stride;
|
||||
}
|
||||
|
||||
static unsigned int gen12_ccs_aux_stride(struct intel_framebuffer *fb, int ccs_plane)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(fb->base.dev);
|
||||
int main_plane = skl_ccs_to_main_plane(&fb->base, ccs_plane);
|
||||
unsigned int main_stride = fb->base.pitches[main_plane];
|
||||
unsigned int main_tile_width = intel_tile_width_bytes(&fb->base, main_plane);
|
||||
|
||||
/*
|
||||
* On ADL-P the AUX stride must align with a power-of-two aligned main
|
||||
* surface stride. The stride of the allocated main surface object can
|
||||
* be less than this POT stride, which is then autopadded to the POT
|
||||
* size.
|
||||
*/
|
||||
if (IS_ALDERLAKE_P(i915))
|
||||
main_stride = gen12_aligned_scanout_stride(fb, main_plane);
|
||||
|
||||
return DIV_ROUND_UP(main_stride, 4 * main_tile_width) * 64;
|
||||
}
|
||||
|
||||
int skl_main_to_aux_plane(const struct drm_framebuffer *fb, int main_plane)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(fb->dev);
|
||||
@ -79,16 +113,70 @@ unsigned int intel_tile_size(const struct drm_i915_private *i915)
|
||||
return DISPLAY_VER(i915) == 2 ? 2048 : 4096;
|
||||
}
|
||||
|
||||
unsigned int
|
||||
intel_tile_width_bytes(const struct drm_framebuffer *fb, int color_plane)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(fb->dev);
|
||||
unsigned int cpp = fb->format->cpp[color_plane];
|
||||
|
||||
switch (fb->modifier) {
|
||||
case DRM_FORMAT_MOD_LINEAR:
|
||||
return intel_tile_size(dev_priv);
|
||||
case I915_FORMAT_MOD_X_TILED:
|
||||
if (DISPLAY_VER(dev_priv) == 2)
|
||||
return 128;
|
||||
else
|
||||
return 512;
|
||||
case I915_FORMAT_MOD_Y_TILED_CCS:
|
||||
if (is_ccs_plane(fb, color_plane))
|
||||
return 128;
|
||||
fallthrough;
|
||||
case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS:
|
||||
case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC:
|
||||
case I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS:
|
||||
if (is_ccs_plane(fb, color_plane))
|
||||
return 64;
|
||||
fallthrough;
|
||||
case I915_FORMAT_MOD_Y_TILED:
|
||||
if (DISPLAY_VER(dev_priv) == 2 || HAS_128_BYTE_Y_TILING(dev_priv))
|
||||
return 128;
|
||||
else
|
||||
return 512;
|
||||
case I915_FORMAT_MOD_Yf_TILED_CCS:
|
||||
if (is_ccs_plane(fb, color_plane))
|
||||
return 128;
|
||||
fallthrough;
|
||||
case I915_FORMAT_MOD_Yf_TILED:
|
||||
switch (cpp) {
|
||||
case 1:
|
||||
return 64;
|
||||
case 2:
|
||||
case 4:
|
||||
return 128;
|
||||
case 8:
|
||||
case 16:
|
||||
return 256;
|
||||
default:
|
||||
MISSING_CASE(cpp);
|
||||
return cpp;
|
||||
}
|
||||
break;
|
||||
default:
|
||||
MISSING_CASE(fb->modifier);
|
||||
return cpp;
|
||||
}
|
||||
}
|
||||
|
||||
unsigned int intel_tile_height(const struct drm_framebuffer *fb, int color_plane)
|
||||
{
|
||||
if (is_gen12_ccs_plane(fb, color_plane))
|
||||
return 1;
|
||||
|
||||
return intel_tile_size(to_i915(fb->dev)) /
|
||||
intel_tile_width_bytes(fb, color_plane);
|
||||
}
|
||||
|
||||
/* Return the tile dimensions in pixel units */
|
||||
/*
|
||||
* Return the tile dimensions in pixel units, based on the (2 or 4 kbyte) GTT
|
||||
* page tile size.
|
||||
*/
|
||||
static void intel_tile_dims(const struct drm_framebuffer *fb, int color_plane,
|
||||
unsigned int *tile_width,
|
||||
unsigned int *tile_height)
|
||||
@ -100,6 +188,21 @@ static void intel_tile_dims(const struct drm_framebuffer *fb, int color_plane,
|
||||
*tile_height = intel_tile_height(fb, color_plane);
|
||||
}
|
||||
|
||||
/*
|
||||
* Return the tile dimensions in pixel units, based on the tile block size.
|
||||
* The block covers the full GTT page sized tile on all tiled surfaces and
|
||||
* it's a 64 byte portion of the tile on TGL+ CCS surfaces.
|
||||
*/
|
||||
static void intel_tile_block_dims(const struct drm_framebuffer *fb, int color_plane,
|
||||
unsigned int *tile_width,
|
||||
unsigned int *tile_height)
|
||||
{
|
||||
intel_tile_dims(fb, color_plane, tile_width, tile_height);
|
||||
|
||||
if (is_gen12_ccs_plane(fb, color_plane))
|
||||
*tile_height = 1;
|
||||
}
|
||||
|
||||
unsigned int intel_tile_row_size(const struct drm_framebuffer *fb, int color_plane)
|
||||
{
|
||||
unsigned int tile_width, tile_height;
|
||||
@ -109,6 +212,31 @@ unsigned int intel_tile_row_size(const struct drm_framebuffer *fb, int color_pla
|
||||
return fb->pitches[color_plane] * tile_height;
|
||||
}
|
||||
|
||||
unsigned int
|
||||
intel_fb_align_height(const struct drm_framebuffer *fb,
|
||||
int color_plane, unsigned int height)
|
||||
{
|
||||
unsigned int tile_height = intel_tile_height(fb, color_plane);
|
||||
|
||||
return ALIGN(height, tile_height);
|
||||
}
|
||||
|
||||
static unsigned int intel_fb_modifier_to_tiling(u64 fb_modifier)
|
||||
{
|
||||
switch (fb_modifier) {
|
||||
case I915_FORMAT_MOD_X_TILED:
|
||||
return I915_TILING_X;
|
||||
case I915_FORMAT_MOD_Y_TILED:
|
||||
case I915_FORMAT_MOD_Y_TILED_CCS:
|
||||
case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS:
|
||||
case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC:
|
||||
case I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS:
|
||||
return I915_TILING_Y;
|
||||
default:
|
||||
return I915_TILING_NONE;
|
||||
}
|
||||
}
|
||||
|
||||
unsigned int intel_cursor_alignment(const struct drm_i915_private *i915)
|
||||
{
|
||||
if (IS_I830(i915))
|
||||
@ -121,6 +249,70 @@ unsigned int intel_cursor_alignment(const struct drm_i915_private *i915)
|
||||
return 4 * 1024;
|
||||
}
|
||||
|
||||
static unsigned int intel_linear_alignment(const struct drm_i915_private *dev_priv)
|
||||
{
|
||||
if (DISPLAY_VER(dev_priv) >= 9)
|
||||
return 256 * 1024;
|
||||
else if (IS_I965G(dev_priv) || IS_I965GM(dev_priv) ||
|
||||
IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
|
||||
return 128 * 1024;
|
||||
else if (DISPLAY_VER(dev_priv) >= 4)
|
||||
return 4 * 1024;
|
||||
else
|
||||
return 0;
|
||||
}
|
||||
|
||||
unsigned int intel_surf_alignment(const struct drm_framebuffer *fb,
|
||||
int color_plane)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(fb->dev);
|
||||
|
||||
if (intel_fb_uses_dpt(fb))
|
||||
return 512 * 4096;
|
||||
|
||||
/* AUX_DIST needs only 4K alignment */
|
||||
if (is_ccs_plane(fb, color_plane))
|
||||
return 4096;
|
||||
|
||||
if (is_semiplanar_uv_plane(fb, color_plane)) {
|
||||
/*
|
||||
* TODO: cross-check wrt. the bspec stride in bytes * 64 bytes
|
||||
* alignment for linear UV planes on all platforms.
|
||||
*/
|
||||
if (DISPLAY_VER(dev_priv) >= 12) {
|
||||
if (fb->modifier == DRM_FORMAT_MOD_LINEAR)
|
||||
return intel_linear_alignment(dev_priv);
|
||||
|
||||
return intel_tile_row_size(fb, color_plane);
|
||||
}
|
||||
|
||||
return 4096;
|
||||
}
|
||||
|
||||
drm_WARN_ON(&dev_priv->drm, color_plane != 0);
|
||||
|
||||
switch (fb->modifier) {
|
||||
case DRM_FORMAT_MOD_LINEAR:
|
||||
return intel_linear_alignment(dev_priv);
|
||||
case I915_FORMAT_MOD_X_TILED:
|
||||
if (HAS_ASYNC_FLIPS(dev_priv))
|
||||
return 256 * 1024;
|
||||
return 0;
|
||||
case I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS:
|
||||
case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS:
|
||||
case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC:
|
||||
return 16 * 1024;
|
||||
case I915_FORMAT_MOD_Y_TILED_CCS:
|
||||
case I915_FORMAT_MOD_Yf_TILED_CCS:
|
||||
case I915_FORMAT_MOD_Y_TILED:
|
||||
case I915_FORMAT_MOD_Yf_TILED:
|
||||
return 1 * 1024 * 1024;
|
||||
default:
|
||||
MISSING_CASE(fb->modifier);
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
void intel_fb_plane_get_subsampling(int *hsub, int *vsub,
|
||||
const struct drm_framebuffer *fb,
|
||||
int color_plane)
|
||||
@ -165,15 +357,29 @@ void intel_fb_plane_get_subsampling(int *hsub, int *vsub,
|
||||
|
||||
static void intel_fb_plane_dims(const struct intel_framebuffer *fb, int color_plane, int *w, int *h)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(fb->base.dev);
|
||||
int main_plane = is_ccs_plane(&fb->base, color_plane) ?
|
||||
skl_ccs_to_main_plane(&fb->base, color_plane) : 0;
|
||||
unsigned int main_width = fb->base.width;
|
||||
unsigned int main_height = fb->base.height;
|
||||
int main_hsub, main_vsub;
|
||||
int hsub, vsub;
|
||||
|
||||
/*
|
||||
* On ADL-P the CCS AUX surface layout always aligns with the
|
||||
* power-of-two aligned main surface stride. The main surface
|
||||
* stride in the allocated FB object may not be power-of-two
|
||||
* sized, in which case it is auto-padded to the POT size.
|
||||
*/
|
||||
if (IS_ALDERLAKE_P(i915) && is_ccs_plane(&fb->base, color_plane))
|
||||
main_width = gen12_aligned_scanout_stride(fb, 0) /
|
||||
fb->base.format->cpp[0];
|
||||
|
||||
intel_fb_plane_get_subsampling(&main_hsub, &main_vsub, &fb->base, main_plane);
|
||||
intel_fb_plane_get_subsampling(&hsub, &vsub, &fb->base, color_plane);
|
||||
*w = fb->base.width / main_hsub / hsub;
|
||||
*h = fb->base.height / main_vsub / vsub;
|
||||
|
||||
*w = main_width / main_hsub / hsub;
|
||||
*h = main_height / main_vsub / vsub;
|
||||
}
|
||||
|
||||
static u32 intel_adjust_tile_offset(int *x, int *y,
|
||||
@ -355,17 +561,8 @@ static int intel_fb_offset_to_xy(int *x, int *y,
|
||||
unsigned int height;
|
||||
u32 alignment;
|
||||
|
||||
/*
|
||||
* All DPT color planes must be 512*4k aligned (the amount mapped by a
|
||||
* single DPT page). For ADL_P CCS FBs this only works by requiring
|
||||
* the allocated offsets to be 2MB aligned. Once supoort to remap
|
||||
* such FBs is added we can remove this requirement, as then all the
|
||||
* planes can be remapped to an aligned offset.
|
||||
*/
|
||||
if (IS_ALDERLAKE_P(i915) && is_ccs_modifier(fb->modifier))
|
||||
alignment = 512 * 4096;
|
||||
else if (DISPLAY_VER(i915) >= 12 &&
|
||||
is_semiplanar_uv_plane(fb, color_plane))
|
||||
if (DISPLAY_VER(i915) >= 12 &&
|
||||
is_semiplanar_uv_plane(fb, color_plane))
|
||||
alignment = intel_tile_row_size(fb, color_plane);
|
||||
else if (fb->modifier != DRM_FORMAT_MOD_LINEAR)
|
||||
alignment = intel_tile_size(i915);
|
||||
@ -416,7 +613,12 @@ static int intel_fb_check_ccs_xy(const struct drm_framebuffer *fb, int ccs_plane
|
||||
if (!is_ccs_plane(fb, ccs_plane) || is_gen12_ccs_cc_plane(fb, ccs_plane))
|
||||
return 0;
|
||||
|
||||
intel_tile_dims(fb, ccs_plane, &tile_width, &tile_height);
|
||||
/*
|
||||
* While all the tile dimensions are based on a 2k or 4k GTT page size
|
||||
* here the main and CCS coordinates must match only within a (64 byte
|
||||
* on TGL+) block inside the tile.
|
||||
*/
|
||||
intel_tile_block_dims(fb, ccs_plane, &tile_width, &tile_height);
|
||||
intel_fb_plane_get_subsampling(&hsub, &vsub, fb, ccs_plane);
|
||||
|
||||
tile_width *= hsub;
|
||||
@ -491,8 +693,7 @@ bool intel_fb_needs_pot_stride_remap(const struct intel_framebuffer *fb)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(fb->base.dev);
|
||||
|
||||
return IS_ALDERLAKE_P(i915) && fb->base.modifier != DRM_FORMAT_MOD_LINEAR &&
|
||||
!is_ccs_modifier(fb->base.modifier);
|
||||
return IS_ALDERLAKE_P(i915) && fb->base.modifier != DRM_FORMAT_MOD_LINEAR;
|
||||
}
|
||||
|
||||
static int intel_fb_pitch(const struct intel_framebuffer *fb, int color_plane, unsigned int rotation)
|
||||
@ -612,14 +813,16 @@ static unsigned int
|
||||
plane_view_dst_stride_tiles(const struct intel_framebuffer *fb, int color_plane,
|
||||
unsigned int pitch_tiles)
|
||||
{
|
||||
if (intel_fb_needs_pot_stride_remap(fb))
|
||||
if (intel_fb_needs_pot_stride_remap(fb)) {
|
||||
unsigned int min_stride = is_ccs_plane(&fb->base, color_plane) ? 2 : 8;
|
||||
/*
|
||||
* ADL_P, the only platform needing a POT stride has a minimum
|
||||
* of 8 stride tiles.
|
||||
* of 8 main surface and 2 CCS AUX stride tiles.
|
||||
*/
|
||||
return roundup_pow_of_two(max(pitch_tiles, 8u));
|
||||
else
|
||||
return roundup_pow_of_two(max(pitch_tiles, min_stride));
|
||||
} else {
|
||||
return pitch_tiles;
|
||||
}
|
||||
}
|
||||
|
||||
static unsigned int
|
||||
@ -655,7 +858,7 @@ static u32 calc_plane_remap_info(const struct intel_framebuffer *fb, int color_p
|
||||
unsigned int tile_height = dims->tile_height;
|
||||
unsigned int tile_size = intel_tile_size(i915);
|
||||
struct drm_rect r;
|
||||
u32 size;
|
||||
u32 size = 0;
|
||||
|
||||
assign_chk_ovf(i915, remap_info->offset, obj_offset);
|
||||
assign_chk_ovf(i915, remap_info->src_stride, plane_view_src_stride_tiles(fb, color_plane, dims));
|
||||
@ -680,7 +883,7 @@ static u32 calc_plane_remap_info(const struct intel_framebuffer *fb, int color_p
|
||||
|
||||
color_plane_info->stride = remap_info->dst_stride * tile_height;
|
||||
|
||||
size = remap_info->dst_stride * remap_info->width;
|
||||
size += remap_info->dst_stride * remap_info->width;
|
||||
|
||||
/* rotate the tile dimensions to match the GTT view */
|
||||
swap(tile_width, tile_height);
|
||||
@ -689,6 +892,14 @@ static u32 calc_plane_remap_info(const struct intel_framebuffer *fb, int color_p
|
||||
|
||||
check_array_bounds(i915, view->gtt.remapped.plane, color_plane);
|
||||
|
||||
if (view->gtt.remapped.plane_alignment) {
|
||||
unsigned int aligned_offset = ALIGN(gtt_offset,
|
||||
view->gtt.remapped.plane_alignment);
|
||||
|
||||
size += aligned_offset - gtt_offset;
|
||||
gtt_offset = aligned_offset;
|
||||
}
|
||||
|
||||
assign_chk_ovf(i915, remap_info->dst_stride,
|
||||
plane_view_dst_stride_tiles(fb, color_plane, remap_info->width));
|
||||
|
||||
@ -698,7 +909,7 @@ static u32 calc_plane_remap_info(const struct intel_framebuffer *fb, int color_p
|
||||
color_plane_info->stride = remap_info->dst_stride * tile_width *
|
||||
fb->base.format->cpp[color_plane];
|
||||
|
||||
size = remap_info->dst_stride * remap_info->height;
|
||||
size += remap_info->dst_stride * remap_info->height;
|
||||
}
|
||||
|
||||
/*
|
||||
@ -745,10 +956,14 @@ calc_plane_normal_size(const struct intel_framebuffer *fb, int color_plane,
|
||||
return tiles;
|
||||
}
|
||||
|
||||
static void intel_fb_view_init(struct intel_fb_view *view, enum i915_ggtt_view_type view_type)
|
||||
static void intel_fb_view_init(struct drm_i915_private *i915, struct intel_fb_view *view,
|
||||
enum i915_ggtt_view_type view_type)
|
||||
{
|
||||
memset(view, 0, sizeof(*view));
|
||||
view->gtt.type = view_type;
|
||||
|
||||
if (view_type == I915_GGTT_VIEW_REMAPPED && IS_ALDERLAKE_P(i915))
|
||||
view->gtt.remapped.plane_alignment = SZ_2M / PAGE_SIZE;
|
||||
}
|
||||
|
||||
bool intel_fb_supports_90_270_rotation(const struct intel_framebuffer *fb)
|
||||
@ -769,16 +984,16 @@ int intel_fill_fb_info(struct drm_i915_private *i915, struct intel_framebuffer *
|
||||
int i, num_planes = fb->base.format->num_planes;
|
||||
unsigned int tile_size = intel_tile_size(i915);
|
||||
|
||||
intel_fb_view_init(&fb->normal_view, I915_GGTT_VIEW_NORMAL);
|
||||
intel_fb_view_init(i915, &fb->normal_view, I915_GGTT_VIEW_NORMAL);
|
||||
|
||||
drm_WARN_ON(&i915->drm,
|
||||
intel_fb_supports_90_270_rotation(fb) &&
|
||||
intel_fb_needs_pot_stride_remap(fb));
|
||||
|
||||
if (intel_fb_supports_90_270_rotation(fb))
|
||||
intel_fb_view_init(&fb->rotated_view, I915_GGTT_VIEW_ROTATED);
|
||||
intel_fb_view_init(i915, &fb->rotated_view, I915_GGTT_VIEW_ROTATED);
|
||||
if (intel_fb_needs_pot_stride_remap(fb))
|
||||
intel_fb_view_init(&fb->remapped_view, I915_GGTT_VIEW_REMAPPED);
|
||||
intel_fb_view_init(i915, &fb->remapped_view, I915_GGTT_VIEW_REMAPPED);
|
||||
|
||||
for (i = 0; i < num_planes; i++) {
|
||||
struct fb_plane_view_dims view_dims;
|
||||
@ -856,7 +1071,7 @@ static void intel_plane_remap_gtt(struct intel_plane_state *plane_state)
|
||||
unsigned int src_w, src_h;
|
||||
u32 gtt_offset = 0;
|
||||
|
||||
intel_fb_view_init(&plane_state->view,
|
||||
intel_fb_view_init(i915, &plane_state->view,
|
||||
drm_rotation_90_or_270(rotation) ? I915_GGTT_VIEW_ROTATED :
|
||||
I915_GGTT_VIEW_REMAPPED);
|
||||
|
||||
@ -918,6 +1133,79 @@ void intel_fb_fill_view(const struct intel_framebuffer *fb, unsigned int rotatio
|
||||
*view = fb->normal_view;
|
||||
}
|
||||
|
||||
static
|
||||
u32 intel_fb_max_stride(struct drm_i915_private *dev_priv,
|
||||
u32 pixel_format, u64 modifier)
|
||||
{
|
||||
/*
|
||||
* Arbitrary limit for gen4+ chosen to match the
|
||||
* render engine max stride.
|
||||
*
|
||||
* The new CCS hash mode makes remapping impossible
|
||||
*/
|
||||
if (DISPLAY_VER(dev_priv) < 4 || is_ccs_modifier(modifier) ||
|
||||
intel_modifier_uses_dpt(dev_priv, modifier))
|
||||
return intel_plane_fb_max_stride(dev_priv, pixel_format, modifier);
|
||||
else if (DISPLAY_VER(dev_priv) >= 7)
|
||||
return 256 * 1024;
|
||||
else
|
||||
return 128 * 1024;
|
||||
}
|
||||
|
||||
static u32
|
||||
intel_fb_stride_alignment(const struct drm_framebuffer *fb, int color_plane)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(fb->dev);
|
||||
u32 tile_width;
|
||||
|
||||
if (is_surface_linear(fb, color_plane)) {
|
||||
u32 max_stride = intel_plane_fb_max_stride(dev_priv,
|
||||
fb->format->format,
|
||||
fb->modifier);
|
||||
|
||||
/*
|
||||
* To make remapping with linear generally feasible
|
||||
* we need the stride to be page aligned.
|
||||
*/
|
||||
if (fb->pitches[color_plane] > max_stride &&
|
||||
!is_ccs_modifier(fb->modifier))
|
||||
return intel_tile_size(dev_priv);
|
||||
else
|
||||
return 64;
|
||||
}
|
||||
|
||||
tile_width = intel_tile_width_bytes(fb, color_plane);
|
||||
if (is_ccs_modifier(fb->modifier)) {
|
||||
/*
|
||||
* On ADL-P the stride must be either 8 tiles or a stride
|
||||
* that is aligned to 16 tiles, required by the 16 tiles =
|
||||
* 64 kbyte CCS AUX PTE granularity, allowing CCS FBs to be
|
||||
* remapped.
|
||||
*/
|
||||
if (IS_ALDERLAKE_P(dev_priv))
|
||||
tile_width *= fb->pitches[0] <= tile_width * 8 ? 8 : 16;
|
||||
/*
|
||||
* On TGL the surface stride must be 4 tile aligned, mapped by
|
||||
* one 64 byte cacheline on the CCS AUX surface.
|
||||
*/
|
||||
else if (DISPLAY_VER(dev_priv) >= 12)
|
||||
tile_width *= 4;
|
||||
/*
|
||||
* Display WA #0531: skl,bxt,kbl,glk
|
||||
*
|
||||
* Render decompression and plane width > 3840
|
||||
* combined with horizontal panning requires the
|
||||
* plane stride to be a multiple of 4. We'll just
|
||||
* require the entire fb to accommodate that to avoid
|
||||
* potential runtime errors at plane configuration time.
|
||||
*/
|
||||
else if ((DISPLAY_VER(dev_priv) == 9 || IS_GEMINILAKE(dev_priv)) &&
|
||||
color_plane == 0 && fb->width > 3840)
|
||||
tile_width *= 4;
|
||||
}
|
||||
return tile_width;
|
||||
}
|
||||
|
||||
static int intel_plane_check_stride(const struct intel_plane_state *plane_state)
|
||||
{
|
||||
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
|
||||
@ -981,3 +1269,257 @@ int intel_plane_compute_gtt(struct intel_plane_state *plane_state)
|
||||
|
||||
return intel_plane_check_stride(plane_state);
|
||||
}
|
||||
|
||||
static void intel_user_framebuffer_destroy(struct drm_framebuffer *fb)
|
||||
{
|
||||
struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
|
||||
|
||||
drm_framebuffer_cleanup(fb);
|
||||
|
||||
if (intel_fb_uses_dpt(fb))
|
||||
intel_dpt_destroy(intel_fb->dpt_vm);
|
||||
|
||||
intel_frontbuffer_put(intel_fb->frontbuffer);
|
||||
|
||||
kfree(intel_fb);
|
||||
}
|
||||
|
||||
static int intel_user_framebuffer_create_handle(struct drm_framebuffer *fb,
|
||||
struct drm_file *file,
|
||||
unsigned int *handle)
|
||||
{
|
||||
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
|
||||
struct drm_i915_private *i915 = to_i915(obj->base.dev);
|
||||
|
||||
if (i915_gem_object_is_userptr(obj)) {
|
||||
drm_dbg(&i915->drm,
|
||||
"attempting to use a userptr for a framebuffer, denied\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
return drm_gem_handle_create(file, &obj->base, handle);
|
||||
}
|
||||
|
||||
static int intel_user_framebuffer_dirty(struct drm_framebuffer *fb,
|
||||
struct drm_file *file,
|
||||
unsigned int flags, unsigned int color,
|
||||
struct drm_clip_rect *clips,
|
||||
unsigned int num_clips)
|
||||
{
|
||||
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
|
||||
|
||||
i915_gem_object_flush_if_display(obj);
|
||||
intel_frontbuffer_flush(to_intel_frontbuffer(fb), ORIGIN_DIRTYFB);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct drm_framebuffer_funcs intel_fb_funcs = {
|
||||
.destroy = intel_user_framebuffer_destroy,
|
||||
.create_handle = intel_user_framebuffer_create_handle,
|
||||
.dirty = intel_user_framebuffer_dirty,
|
||||
};
|
||||
|
||||
int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
|
||||
struct drm_i915_gem_object *obj,
|
||||
struct drm_mode_fb_cmd2 *mode_cmd)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
|
||||
struct drm_framebuffer *fb = &intel_fb->base;
|
||||
u32 max_stride;
|
||||
unsigned int tiling, stride;
|
||||
int ret = -EINVAL;
|
||||
int i;
|
||||
|
||||
intel_fb->frontbuffer = intel_frontbuffer_get(obj);
|
||||
if (!intel_fb->frontbuffer)
|
||||
return -ENOMEM;
|
||||
|
||||
i915_gem_object_lock(obj, NULL);
|
||||
tiling = i915_gem_object_get_tiling(obj);
|
||||
stride = i915_gem_object_get_stride(obj);
|
||||
i915_gem_object_unlock(obj);
|
||||
|
||||
if (mode_cmd->flags & DRM_MODE_FB_MODIFIERS) {
|
||||
/*
|
||||
* If there's a fence, enforce that
|
||||
* the fb modifier and tiling mode match.
|
||||
*/
|
||||
if (tiling != I915_TILING_NONE &&
|
||||
tiling != intel_fb_modifier_to_tiling(mode_cmd->modifier[0])) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"tiling_mode doesn't match fb modifier\n");
|
||||
goto err;
|
||||
}
|
||||
} else {
|
||||
if (tiling == I915_TILING_X) {
|
||||
mode_cmd->modifier[0] = I915_FORMAT_MOD_X_TILED;
|
||||
} else if (tiling == I915_TILING_Y) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"No Y tiling for legacy addfb\n");
|
||||
goto err;
|
||||
}
|
||||
}
|
||||
|
||||
if (!drm_any_plane_has_format(&dev_priv->drm,
|
||||
mode_cmd->pixel_format,
|
||||
mode_cmd->modifier[0])) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"unsupported pixel format %p4cc / modifier 0x%llx\n",
|
||||
&mode_cmd->pixel_format, mode_cmd->modifier[0]);
|
||||
goto err;
|
||||
}
|
||||
|
||||
/*
|
||||
* gen2/3 display engine uses the fence if present,
|
||||
* so the tiling mode must match the fb modifier exactly.
|
||||
*/
|
||||
if (DISPLAY_VER(dev_priv) < 4 &&
|
||||
tiling != intel_fb_modifier_to_tiling(mode_cmd->modifier[0])) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"tiling_mode must match fb modifier exactly on gen2/3\n");
|
||||
goto err;
|
||||
}
|
||||
|
||||
max_stride = intel_fb_max_stride(dev_priv, mode_cmd->pixel_format,
|
||||
mode_cmd->modifier[0]);
|
||||
if (mode_cmd->pitches[0] > max_stride) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"%s pitch (%u) must be at most %d\n",
|
||||
mode_cmd->modifier[0] != DRM_FORMAT_MOD_LINEAR ?
|
||||
"tiled" : "linear",
|
||||
mode_cmd->pitches[0], max_stride);
|
||||
goto err;
|
||||
}
|
||||
|
||||
/*
|
||||
* If there's a fence, enforce that
|
||||
* the fb pitch and fence stride match.
|
||||
*/
|
||||
if (tiling != I915_TILING_NONE && mode_cmd->pitches[0] != stride) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"pitch (%d) must match tiling stride (%d)\n",
|
||||
mode_cmd->pitches[0], stride);
|
||||
goto err;
|
||||
}
|
||||
|
||||
/* FIXME need to adjust LINOFF/TILEOFF accordingly. */
|
||||
if (mode_cmd->offsets[0] != 0) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"plane 0 offset (0x%08x) must be 0\n",
|
||||
mode_cmd->offsets[0]);
|
||||
goto err;
|
||||
}
|
||||
|
||||
drm_helper_mode_fill_fb_struct(&dev_priv->drm, fb, mode_cmd);
|
||||
|
||||
for (i = 0; i < fb->format->num_planes; i++) {
|
||||
u32 stride_alignment;
|
||||
|
||||
if (mode_cmd->handles[i] != mode_cmd->handles[0]) {
|
||||
drm_dbg_kms(&dev_priv->drm, "bad plane %d handle\n",
|
||||
i);
|
||||
goto err;
|
||||
}
|
||||
|
||||
stride_alignment = intel_fb_stride_alignment(fb, i);
|
||||
if (fb->pitches[i] & (stride_alignment - 1)) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"plane %d pitch (%d) must be at least %u byte aligned\n",
|
||||
i, fb->pitches[i], stride_alignment);
|
||||
goto err;
|
||||
}
|
||||
|
||||
if (is_gen12_ccs_plane(fb, i) && !is_gen12_ccs_cc_plane(fb, i)) {
|
||||
int ccs_aux_stride = gen12_ccs_aux_stride(intel_fb, i);
|
||||
|
||||
if (fb->pitches[i] != ccs_aux_stride) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"ccs aux plane %d pitch (%d) must be %d\n",
|
||||
i,
|
||||
fb->pitches[i], ccs_aux_stride);
|
||||
goto err;
|
||||
}
|
||||
}
|
||||
|
||||
fb->obj[i] = &obj->base;
|
||||
}
|
||||
|
||||
ret = intel_fill_fb_info(dev_priv, intel_fb);
|
||||
if (ret)
|
||||
goto err;
|
||||
|
||||
if (intel_fb_uses_dpt(fb)) {
|
||||
struct i915_address_space *vm;
|
||||
|
||||
vm = intel_dpt_create(intel_fb);
|
||||
if (IS_ERR(vm)) {
|
||||
ret = PTR_ERR(vm);
|
||||
goto err;
|
||||
}
|
||||
|
||||
intel_fb->dpt_vm = vm;
|
||||
}
|
||||
|
||||
ret = drm_framebuffer_init(&dev_priv->drm, fb, &intel_fb_funcs);
|
||||
if (ret) {
|
||||
drm_err(&dev_priv->drm, "framebuffer init failed %d\n", ret);
|
||||
goto err;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
err:
|
||||
intel_frontbuffer_put(intel_fb->frontbuffer);
|
||||
return ret;
|
||||
}
|
||||
|
||||
struct drm_framebuffer *
|
||||
intel_user_framebuffer_create(struct drm_device *dev,
|
||||
struct drm_file *filp,
|
||||
const struct drm_mode_fb_cmd2 *user_mode_cmd)
|
||||
{
|
||||
struct drm_framebuffer *fb;
|
||||
struct drm_i915_gem_object *obj;
|
||||
struct drm_mode_fb_cmd2 mode_cmd = *user_mode_cmd;
|
||||
struct drm_i915_private *i915;
|
||||
|
||||
obj = i915_gem_object_lookup(filp, mode_cmd.handles[0]);
|
||||
if (!obj)
|
||||
return ERR_PTR(-ENOENT);
|
||||
|
||||
/* object is backed with LMEM for discrete */
|
||||
i915 = to_i915(obj->base.dev);
|
||||
if (HAS_LMEM(i915) && !i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM)) {
|
||||
/* object is "remote", not in local memory */
|
||||
i915_gem_object_put(obj);
|
||||
return ERR_PTR(-EREMOTE);
|
||||
}
|
||||
|
||||
fb = intel_framebuffer_create(obj, &mode_cmd);
|
||||
i915_gem_object_put(obj);
|
||||
|
||||
return fb;
|
||||
}
|
||||
|
||||
struct drm_framebuffer *
|
||||
intel_framebuffer_create(struct drm_i915_gem_object *obj,
|
||||
struct drm_mode_fb_cmd2 *mode_cmd)
|
||||
{
|
||||
struct intel_framebuffer *intel_fb;
|
||||
int ret;
|
||||
|
||||
intel_fb = kzalloc(sizeof(*intel_fb), GFP_KERNEL);
|
||||
if (!intel_fb)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
ret = intel_framebuffer_init(intel_fb, obj, mode_cmd);
|
||||
if (ret)
|
||||
goto err;
|
||||
|
||||
return &intel_fb->base;
|
||||
|
||||
err:
|
||||
kfree(intel_fb);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
|
@ -8,10 +8,12 @@
|
||||
|
||||
#include <linux/types.h>
|
||||
|
||||
struct drm_device;
|
||||
struct drm_file;
|
||||
struct drm_framebuffer;
|
||||
|
||||
struct drm_i915_gem_object;
|
||||
struct drm_i915_private;
|
||||
|
||||
struct drm_mode_fb_cmd2;
|
||||
struct intel_fb_view;
|
||||
struct intel_framebuffer;
|
||||
struct intel_plane_state;
|
||||
@ -28,10 +30,14 @@ int skl_ccs_to_main_plane(const struct drm_framebuffer *fb, int ccs_plane);
|
||||
int skl_main_to_aux_plane(const struct drm_framebuffer *fb, int main_plane);
|
||||
|
||||
unsigned int intel_tile_size(const struct drm_i915_private *i915);
|
||||
unsigned int intel_tile_width_bytes(const struct drm_framebuffer *fb, int color_plane);
|
||||
unsigned int intel_tile_height(const struct drm_framebuffer *fb, int color_plane);
|
||||
unsigned int intel_tile_row_size(const struct drm_framebuffer *fb, int color_plane);
|
||||
|
||||
unsigned int intel_fb_align_height(const struct drm_framebuffer *fb,
|
||||
int color_plane, unsigned int height);
|
||||
unsigned int intel_cursor_alignment(const struct drm_i915_private *i915);
|
||||
unsigned int intel_surf_alignment(const struct drm_framebuffer *fb,
|
||||
int color_plane);
|
||||
|
||||
void intel_fb_plane_get_subsampling(int *hsub, int *vsub,
|
||||
const struct drm_framebuffer *fb,
|
||||
@ -53,4 +59,12 @@ void intel_fb_fill_view(const struct intel_framebuffer *fb, unsigned int rotatio
|
||||
struct intel_fb_view *view);
|
||||
int intel_plane_compute_gtt(struct intel_plane_state *plane_state);
|
||||
|
||||
int intel_framebuffer_init(struct intel_framebuffer *ifb,
|
||||
struct drm_i915_gem_object *obj,
|
||||
struct drm_mode_fb_cmd2 *mode_cmd);
|
||||
struct drm_framebuffer *
|
||||
intel_user_framebuffer_create(struct drm_device *dev,
|
||||
struct drm_file *filp,
|
||||
const struct drm_mode_fb_cmd2 *user_mode_cmd);
|
||||
|
||||
#endif /* __INTEL_FB_H__ */
|
||||
|
@ -62,19 +62,84 @@ static void intel_fbc_get_plane_source_size(const struct intel_fbc_state_cache *
|
||||
*height = cache->plane.src_h;
|
||||
}
|
||||
|
||||
static int intel_fbc_calculate_cfb_size(struct drm_i915_private *dev_priv,
|
||||
const struct intel_fbc_state_cache *cache)
|
||||
/* plane stride in pixels */
|
||||
static unsigned int intel_fbc_plane_stride(const struct intel_plane_state *plane_state)
|
||||
{
|
||||
int lines;
|
||||
const struct drm_framebuffer *fb = plane_state->hw.fb;
|
||||
unsigned int stride;
|
||||
|
||||
stride = plane_state->view.color_plane[0].stride;
|
||||
if (!drm_rotation_90_or_270(plane_state->hw.rotation))
|
||||
stride /= fb->format->cpp[0];
|
||||
|
||||
return stride;
|
||||
}
|
||||
|
||||
/* plane stride based cfb stride in bytes, assuming 1:1 compression limit */
|
||||
static unsigned int _intel_fbc_cfb_stride(const struct intel_fbc_state_cache *cache)
|
||||
{
|
||||
unsigned int cpp = 4; /* FBC always 4 bytes per pixel */
|
||||
|
||||
return cache->fb.stride * cpp;
|
||||
}
|
||||
|
||||
/* minimum acceptable cfb stride in bytes, assuming 1:1 compression limit */
|
||||
static unsigned int skl_fbc_min_cfb_stride(struct drm_i915_private *i915,
|
||||
const struct intel_fbc_state_cache *cache)
|
||||
{
|
||||
unsigned int limit = 4; /* 1:4 compression limit is the worst case */
|
||||
unsigned int cpp = 4; /* FBC always 4 bytes per pixel */
|
||||
unsigned int height = 4; /* FBC segment is 4 lines */
|
||||
unsigned int stride;
|
||||
|
||||
/* minimum segment stride we can use */
|
||||
stride = cache->plane.src_w * cpp * height / limit;
|
||||
|
||||
/*
|
||||
* Wa_16011863758: icl+
|
||||
* Avoid some hardware segment address miscalculation.
|
||||
*/
|
||||
if (DISPLAY_VER(i915) >= 11)
|
||||
stride += 64;
|
||||
|
||||
/*
|
||||
* At least some of the platforms require each 4 line segment to
|
||||
* be 512 byte aligned. Just do it always for simplicity.
|
||||
*/
|
||||
stride = ALIGN(stride, 512);
|
||||
|
||||
/* convert back to single line equivalent with 1:1 compression limit */
|
||||
return stride * limit / height;
|
||||
}
|
||||
|
||||
/* properly aligned cfb stride in bytes, assuming 1:1 compression limit */
|
||||
static unsigned int intel_fbc_cfb_stride(struct drm_i915_private *i915,
|
||||
const struct intel_fbc_state_cache *cache)
|
||||
{
|
||||
unsigned int stride = _intel_fbc_cfb_stride(cache);
|
||||
|
||||
/*
|
||||
* At least some of the platforms require each 4 line segment to
|
||||
* be 512 byte aligned. Aligning each line to 512 bytes guarantees
|
||||
* that regardless of the compression limit we choose later.
|
||||
*/
|
||||
if (DISPLAY_VER(i915) >= 9)
|
||||
return max(ALIGN(stride, 512), skl_fbc_min_cfb_stride(i915, cache));
|
||||
else
|
||||
return stride;
|
||||
}
|
||||
|
||||
static unsigned int intel_fbc_cfb_size(struct drm_i915_private *dev_priv,
|
||||
const struct intel_fbc_state_cache *cache)
|
||||
{
|
||||
int lines = cache->plane.src_h;
|
||||
|
||||
intel_fbc_get_plane_source_size(cache, NULL, &lines);
|
||||
if (DISPLAY_VER(dev_priv) == 7)
|
||||
lines = min(lines, 2048);
|
||||
else if (DISPLAY_VER(dev_priv) >= 8)
|
||||
lines = min(lines, 2560);
|
||||
|
||||
/* Hardware needs the full buffer stride, not just the active area. */
|
||||
return lines * cache->fb.stride;
|
||||
return lines * intel_fbc_cfb_stride(dev_priv, cache);
|
||||
}
|
||||
|
||||
static void i8xx_fbc_deactivate(struct drm_i915_private *dev_priv)
|
||||
@ -99,15 +164,13 @@ static void i8xx_fbc_deactivate(struct drm_i915_private *dev_priv)
|
||||
|
||||
static void i8xx_fbc_activate(struct drm_i915_private *dev_priv)
|
||||
{
|
||||
struct intel_fbc_reg_params *params = &dev_priv->fbc.params;
|
||||
struct intel_fbc *fbc = &dev_priv->fbc;
|
||||
const struct intel_fbc_reg_params *params = &fbc->params;
|
||||
int cfb_pitch;
|
||||
int i;
|
||||
u32 fbc_ctl;
|
||||
|
||||
/* Note: fbc.limit == 1 for i8xx */
|
||||
cfb_pitch = params->cfb_size / FBC_LL_SIZE;
|
||||
if (params->fb.stride < cfb_pitch)
|
||||
cfb_pitch = params->fb.stride;
|
||||
cfb_pitch = params->cfb_stride / fbc->limit;
|
||||
|
||||
/* FBC_CTL wants 32B or 64B units */
|
||||
if (DISPLAY_VER(dev_priv) == 2)
|
||||
@ -150,15 +213,9 @@ static bool i8xx_fbc_is_active(struct drm_i915_private *dev_priv)
|
||||
|
||||
static u32 g4x_dpfc_ctl_limit(struct drm_i915_private *i915)
|
||||
{
|
||||
const struct intel_fbc_reg_params *params = &i915->fbc.params;
|
||||
int limit = i915->fbc.limit;
|
||||
|
||||
if (params->fb.format->cpp[0] == 2)
|
||||
limit <<= 1;
|
||||
|
||||
switch (limit) {
|
||||
switch (i915->fbc.limit) {
|
||||
default:
|
||||
MISSING_CASE(limit);
|
||||
MISSING_CASE(i915->fbc.limit);
|
||||
fallthrough;
|
||||
case 1:
|
||||
return DPFC_CTL_LIMIT_1X;
|
||||
@ -232,16 +289,16 @@ static void i965_fbc_recompress(struct drm_i915_private *dev_priv)
|
||||
/* This function forces a CFB recompression through the nuke operation. */
|
||||
static void snb_fbc_recompress(struct drm_i915_private *dev_priv)
|
||||
{
|
||||
struct intel_fbc *fbc = &dev_priv->fbc;
|
||||
|
||||
trace_intel_fbc_nuke(fbc->crtc);
|
||||
|
||||
intel_de_write(dev_priv, MSG_FBC_REND_STATE, FBC_REND_NUKE);
|
||||
intel_de_posting_read(dev_priv, MSG_FBC_REND_STATE);
|
||||
}
|
||||
|
||||
static void intel_fbc_recompress(struct drm_i915_private *dev_priv)
|
||||
{
|
||||
struct intel_fbc *fbc = &dev_priv->fbc;
|
||||
|
||||
trace_intel_fbc_nuke(fbc->crtc);
|
||||
|
||||
if (DISPLAY_VER(dev_priv) >= 6)
|
||||
snb_fbc_recompress(dev_priv);
|
||||
else if (DISPLAY_VER(dev_priv) >= 4)
|
||||
@ -280,8 +337,6 @@ static void ilk_fbc_activate(struct drm_i915_private *dev_priv)
|
||||
params->fence_y_offset);
|
||||
/* enable it... */
|
||||
intel_de_write(dev_priv, ILK_DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN);
|
||||
|
||||
intel_fbc_recompress(dev_priv);
|
||||
}
|
||||
|
||||
static void ilk_fbc_deactivate(struct drm_i915_private *dev_priv)
|
||||
@ -303,19 +358,29 @@ static bool ilk_fbc_is_active(struct drm_i915_private *dev_priv)
|
||||
|
||||
static void gen7_fbc_activate(struct drm_i915_private *dev_priv)
|
||||
{
|
||||
struct intel_fbc_reg_params *params = &dev_priv->fbc.params;
|
||||
struct intel_fbc *fbc = &dev_priv->fbc;
|
||||
const struct intel_fbc_reg_params *params = &fbc->params;
|
||||
u32 dpfc_ctl;
|
||||
|
||||
/* Display WA #0529: skl, kbl, bxt. */
|
||||
if (DISPLAY_VER(dev_priv) == 9) {
|
||||
u32 val = intel_de_read(dev_priv, CHICKEN_MISC_4);
|
||||
if (DISPLAY_VER(dev_priv) >= 10) {
|
||||
u32 val = 0;
|
||||
|
||||
val &= ~(FBC_STRIDE_OVERRIDE | FBC_STRIDE_MASK);
|
||||
if (params->override_cfb_stride)
|
||||
val |= FBC_STRIDE_OVERRIDE |
|
||||
FBC_STRIDE(params->override_cfb_stride / fbc->limit);
|
||||
|
||||
if (params->gen9_wa_cfb_stride)
|
||||
val |= FBC_STRIDE_OVERRIDE | params->gen9_wa_cfb_stride;
|
||||
intel_de_write(dev_priv, GLK_FBC_STRIDE, val);
|
||||
} else if (DISPLAY_VER(dev_priv) == 9) {
|
||||
u32 val = 0;
|
||||
|
||||
intel_de_write(dev_priv, CHICKEN_MISC_4, val);
|
||||
/* Display WA #0529: skl, kbl, bxt. */
|
||||
if (params->override_cfb_stride)
|
||||
val |= CHICKEN_FBC_STRIDE_OVERRIDE |
|
||||
CHICKEN_FBC_STRIDE(params->override_cfb_stride / fbc->limit);
|
||||
|
||||
intel_de_rmw(dev_priv, CHICKEN_MISC_4,
|
||||
CHICKEN_FBC_STRIDE_OVERRIDE |
|
||||
CHICKEN_FBC_STRIDE_MASK, val);
|
||||
}
|
||||
|
||||
dpfc_ctl = 0;
|
||||
@ -339,8 +404,6 @@ static void gen7_fbc_activate(struct drm_i915_private *dev_priv)
|
||||
dpfc_ctl |= FBC_CTL_FALSE_COLOR;
|
||||
|
||||
intel_de_write(dev_priv, ILK_DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN);
|
||||
|
||||
intel_fbc_recompress(dev_priv);
|
||||
}
|
||||
|
||||
static bool intel_fbc_hw_is_active(struct drm_i915_private *dev_priv)
|
||||
@ -402,6 +465,12 @@ bool intel_fbc_is_active(struct drm_i915_private *dev_priv)
|
||||
return dev_priv->fbc.active;
|
||||
}
|
||||
|
||||
static void intel_fbc_activate(struct drm_i915_private *dev_priv)
|
||||
{
|
||||
intel_fbc_hw_activate(dev_priv);
|
||||
intel_fbc_recompress(dev_priv);
|
||||
}
|
||||
|
||||
static void intel_fbc_deactivate(struct drm_i915_private *dev_priv,
|
||||
const char *reason)
|
||||
{
|
||||
@ -440,30 +509,32 @@ static u64 intel_fbc_stolen_end(struct drm_i915_private *dev_priv)
|
||||
return min(end, intel_fbc_cfb_base_max(dev_priv));
|
||||
}
|
||||
|
||||
static int intel_fbc_max_limit(struct drm_i915_private *dev_priv, int fb_cpp)
|
||||
static int intel_fbc_min_limit(int fb_cpp)
|
||||
{
|
||||
/*
|
||||
* FIXME: FBC1 can have arbitrary cfb stride,
|
||||
* so we could support different compression ratios.
|
||||
*/
|
||||
if (DISPLAY_VER(dev_priv) < 5 && !IS_G4X(dev_priv))
|
||||
return 1;
|
||||
return fb_cpp == 2 ? 2 : 1;
|
||||
}
|
||||
|
||||
static int intel_fbc_max_limit(struct drm_i915_private *dev_priv)
|
||||
{
|
||||
/* WaFbcOnly1to1Ratio:ctg */
|
||||
if (IS_G4X(dev_priv))
|
||||
return 1;
|
||||
|
||||
/* FBC2 can only do 1:1, 1:2, 1:4 */
|
||||
return fb_cpp == 2 ? 2 : 4;
|
||||
/*
|
||||
* FBC2 can only do 1:1, 1:2, 1:4, we limit
|
||||
* FBC1 to the same out of convenience.
|
||||
*/
|
||||
return 4;
|
||||
}
|
||||
|
||||
static int find_compression_limit(struct drm_i915_private *dev_priv,
|
||||
unsigned int size,
|
||||
unsigned int fb_cpp)
|
||||
unsigned int size, int min_limit)
|
||||
{
|
||||
struct intel_fbc *fbc = &dev_priv->fbc;
|
||||
u64 end = intel_fbc_stolen_end(dev_priv);
|
||||
int ret, limit = 1;
|
||||
int ret, limit = min_limit;
|
||||
|
||||
size /= limit;
|
||||
|
||||
/* Try to over-allocate to reduce reallocations and fragmentation. */
|
||||
ret = i915_gem_stolen_insert_node_in_range(dev_priv, &fbc->compressed_fb,
|
||||
@ -471,7 +542,7 @@ static int find_compression_limit(struct drm_i915_private *dev_priv,
|
||||
if (ret == 0)
|
||||
return limit;
|
||||
|
||||
for (; limit <= intel_fbc_max_limit(dev_priv, fb_cpp); limit <<= 1) {
|
||||
for (; limit <= intel_fbc_max_limit(dev_priv); limit <<= 1) {
|
||||
ret = i915_gem_stolen_insert_node_in_range(dev_priv, &fbc->compressed_fb,
|
||||
size >>= 1, 4096, 0, end);
|
||||
if (ret == 0)
|
||||
@ -482,7 +553,7 @@ static int find_compression_limit(struct drm_i915_private *dev_priv,
|
||||
}
|
||||
|
||||
static int intel_fbc_alloc_cfb(struct drm_i915_private *dev_priv,
|
||||
unsigned int size, unsigned int fb_cpp)
|
||||
unsigned int size, int min_limit)
|
||||
{
|
||||
struct intel_fbc *fbc = &dev_priv->fbc;
|
||||
int ret;
|
||||
@ -499,13 +570,12 @@ static int intel_fbc_alloc_cfb(struct drm_i915_private *dev_priv,
|
||||
goto err;
|
||||
}
|
||||
|
||||
ret = find_compression_limit(dev_priv, size, fb_cpp);
|
||||
ret = find_compression_limit(dev_priv, size, min_limit);
|
||||
if (!ret)
|
||||
goto err_llb;
|
||||
else if (ret > 1) {
|
||||
else if (ret > min_limit)
|
||||
drm_info_once(&dev_priv->drm,
|
||||
"Reducing the compressed framebuffer size. This may lead to less power savings than a non-reduced-size. Try to increase stolen memory size if available in BIOS.\n");
|
||||
}
|
||||
|
||||
fbc->limit = ret;
|
||||
|
||||
@ -675,11 +745,10 @@ static bool tiling_is_valid(struct drm_i915_private *dev_priv,
|
||||
{
|
||||
switch (modifier) {
|
||||
case DRM_FORMAT_MOD_LINEAR:
|
||||
if (DISPLAY_VER(dev_priv) >= 9)
|
||||
return true;
|
||||
return false;
|
||||
case I915_FORMAT_MOD_X_TILED:
|
||||
case I915_FORMAT_MOD_Y_TILED:
|
||||
case I915_FORMAT_MOD_Yf_TILED:
|
||||
return DISPLAY_VER(dev_priv) >= 9;
|
||||
case I915_FORMAT_MOD_X_TILED:
|
||||
return true;
|
||||
default:
|
||||
return false;
|
||||
@ -718,11 +787,7 @@ static void intel_fbc_update_state_cache(struct intel_crtc *crtc,
|
||||
|
||||
cache->fb.format = fb->format;
|
||||
cache->fb.modifier = fb->modifier;
|
||||
|
||||
/* FIXME is this correct? */
|
||||
cache->fb.stride = plane_state->view.color_plane[0].stride;
|
||||
if (drm_rotation_90_or_270(plane_state->hw.rotation))
|
||||
cache->fb.stride *= fb->format->cpp[0];
|
||||
cache->fb.stride = intel_fbc_plane_stride(plane_state);
|
||||
|
||||
/* FBC1 compression interval: arbitrary choice of 1 second */
|
||||
cache->interval = drm_mode_vrefresh(&crtc_state->hw.adjusted_mode);
|
||||
@ -745,27 +810,29 @@ static bool intel_fbc_cfb_size_changed(struct drm_i915_private *dev_priv)
|
||||
{
|
||||
struct intel_fbc *fbc = &dev_priv->fbc;
|
||||
|
||||
return intel_fbc_calculate_cfb_size(dev_priv, &fbc->state_cache) >
|
||||
return intel_fbc_cfb_size(dev_priv, &fbc->state_cache) >
|
||||
fbc->compressed_fb.size * fbc->limit;
|
||||
}
|
||||
|
||||
static u16 intel_fbc_gen9_wa_cfb_stride(struct drm_i915_private *dev_priv)
|
||||
static u16 intel_fbc_override_cfb_stride(struct drm_i915_private *dev_priv,
|
||||
const struct intel_fbc_state_cache *cache)
|
||||
{
|
||||
struct intel_fbc *fbc = &dev_priv->fbc;
|
||||
struct intel_fbc_state_cache *cache = &fbc->state_cache;
|
||||
unsigned int stride = _intel_fbc_cfb_stride(cache);
|
||||
unsigned int stride_aligned = intel_fbc_cfb_stride(dev_priv, cache);
|
||||
|
||||
if ((DISPLAY_VER(dev_priv) == 9) &&
|
||||
cache->fb.modifier != I915_FORMAT_MOD_X_TILED)
|
||||
return DIV_ROUND_UP(cache->plane.src_w, 32 * fbc->limit) * 8;
|
||||
else
|
||||
return 0;
|
||||
}
|
||||
/*
|
||||
* Override stride in 64 byte units per 4 line segment.
|
||||
*
|
||||
* Gen9 hw miscalculates cfb stride for linear as
|
||||
* PLANE_STRIDE*512 instead of PLANE_STRIDE*64, so
|
||||
* we always need to use the override there.
|
||||
*/
|
||||
if (stride != stride_aligned ||
|
||||
(DISPLAY_VER(dev_priv) == 9 &&
|
||||
cache->fb.modifier == DRM_FORMAT_MOD_LINEAR))
|
||||
return stride_aligned * 4 / 64;
|
||||
|
||||
static bool intel_fbc_gen9_wa_cfb_stride_changed(struct drm_i915_private *dev_priv)
|
||||
{
|
||||
struct intel_fbc *fbc = &dev_priv->fbc;
|
||||
|
||||
return fbc->params.gen9_wa_cfb_stride != intel_fbc_gen9_wa_cfb_stride(dev_priv);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static bool intel_fbc_can_enable(struct drm_i915_private *dev_priv)
|
||||
@ -860,7 +927,8 @@ static bool intel_fbc_can_activate(struct intel_crtc *crtc)
|
||||
return false;
|
||||
}
|
||||
|
||||
if (!stride_is_valid(dev_priv, cache->fb.modifier, cache->fb.stride)) {
|
||||
if (!stride_is_valid(dev_priv, cache->fb.modifier,
|
||||
cache->fb.stride * cache->fb.format->cpp[0])) {
|
||||
fbc->no_fbc_reason = "framebuffer stride not supported";
|
||||
return false;
|
||||
}
|
||||
@ -948,9 +1016,9 @@ static void intel_fbc_get_reg_params(struct intel_crtc *crtc,
|
||||
params->fb.modifier = cache->fb.modifier;
|
||||
params->fb.stride = cache->fb.stride;
|
||||
|
||||
params->cfb_size = intel_fbc_calculate_cfb_size(dev_priv, cache);
|
||||
|
||||
params->gen9_wa_cfb_stride = cache->gen9_wa_cfb_stride;
|
||||
params->cfb_stride = intel_fbc_cfb_stride(dev_priv, cache);
|
||||
params->cfb_size = intel_fbc_cfb_size(dev_priv, cache);
|
||||
params->override_cfb_stride = intel_fbc_override_cfb_stride(dev_priv, cache);
|
||||
|
||||
params->plane_visible = cache->plane.visible;
|
||||
}
|
||||
@ -981,10 +1049,13 @@ static bool intel_fbc_can_flip_nuke(const struct intel_crtc_state *crtc_state)
|
||||
if (params->fb.stride != cache->fb.stride)
|
||||
return false;
|
||||
|
||||
if (params->cfb_size != intel_fbc_calculate_cfb_size(dev_priv, cache))
|
||||
if (params->cfb_stride != intel_fbc_cfb_stride(dev_priv, cache))
|
||||
return false;
|
||||
|
||||
if (params->gen9_wa_cfb_stride != cache->gen9_wa_cfb_stride)
|
||||
if (params->cfb_size != intel_fbc_cfb_size(dev_priv, cache))
|
||||
return false;
|
||||
|
||||
if (params->override_cfb_stride != intel_fbc_override_cfb_stride(dev_priv, cache))
|
||||
return false;
|
||||
|
||||
return true;
|
||||
@ -1090,7 +1161,7 @@ static void __intel_fbc_post_update(struct intel_crtc *crtc)
|
||||
return;
|
||||
|
||||
if (!fbc->busy_bits)
|
||||
intel_fbc_hw_activate(dev_priv);
|
||||
intel_fbc_activate(dev_priv);
|
||||
else
|
||||
intel_fbc_deactivate(dev_priv, "frontbuffer write");
|
||||
}
|
||||
@ -1129,7 +1200,7 @@ void intel_fbc_invalidate(struct drm_i915_private *dev_priv,
|
||||
if (!HAS_FBC(dev_priv))
|
||||
return;
|
||||
|
||||
if (origin == ORIGIN_GTT || origin == ORIGIN_FLIP)
|
||||
if (origin == ORIGIN_FLIP || origin == ORIGIN_CURSOR_UPDATE)
|
||||
return;
|
||||
|
||||
mutex_lock(&fbc->lock);
|
||||
@ -1150,19 +1221,11 @@ void intel_fbc_flush(struct drm_i915_private *dev_priv,
|
||||
if (!HAS_FBC(dev_priv))
|
||||
return;
|
||||
|
||||
/*
|
||||
* GTT tracking does not nuke the entire cfb
|
||||
* so don't clear busy_bits set for some other
|
||||
* reason.
|
||||
*/
|
||||
if (origin == ORIGIN_GTT)
|
||||
return;
|
||||
|
||||
mutex_lock(&fbc->lock);
|
||||
|
||||
fbc->busy_bits &= ~frontbuffer_bits;
|
||||
|
||||
if (origin == ORIGIN_FLIP)
|
||||
if (origin == ORIGIN_FLIP || origin == ORIGIN_CURSOR_UPDATE)
|
||||
goto out;
|
||||
|
||||
if (!fbc->busy_bits && fbc->crtc &&
|
||||
@ -1246,8 +1309,8 @@ out:
|
||||
* intel_fbc_enable multiple times for the same pipe without an
|
||||
* intel_fbc_disable in the middle, as long as it is deactivated.
|
||||
*/
|
||||
void intel_fbc_enable(struct intel_atomic_state *state,
|
||||
struct intel_crtc *crtc)
|
||||
static void intel_fbc_enable(struct intel_atomic_state *state,
|
||||
struct intel_crtc *crtc)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
|
||||
struct intel_plane *plane = to_intel_plane(crtc->base.primary);
|
||||
@ -1257,16 +1320,22 @@ void intel_fbc_enable(struct intel_atomic_state *state,
|
||||
intel_atomic_get_new_plane_state(state, plane);
|
||||
struct intel_fbc *fbc = &dev_priv->fbc;
|
||||
struct intel_fbc_state_cache *cache = &fbc->state_cache;
|
||||
int min_limit;
|
||||
|
||||
if (!plane->has_fbc || !plane_state)
|
||||
return;
|
||||
|
||||
min_limit = intel_fbc_min_limit(plane_state->hw.fb ?
|
||||
plane_state->hw.fb->format->cpp[0] : 0);
|
||||
|
||||
mutex_lock(&fbc->lock);
|
||||
|
||||
if (fbc->crtc) {
|
||||
if (fbc->crtc != crtc ||
|
||||
(!intel_fbc_cfb_size_changed(dev_priv) &&
|
||||
!intel_fbc_gen9_wa_cfb_stride_changed(dev_priv)))
|
||||
if (fbc->crtc != crtc)
|
||||
goto out;
|
||||
|
||||
if (fbc->limit >= min_limit &&
|
||||
!intel_fbc_cfb_size_changed(dev_priv))
|
||||
goto out;
|
||||
|
||||
__intel_fbc_disable(dev_priv);
|
||||
@ -1281,15 +1350,12 @@ void intel_fbc_enable(struct intel_atomic_state *state,
|
||||
goto out;
|
||||
|
||||
if (intel_fbc_alloc_cfb(dev_priv,
|
||||
intel_fbc_calculate_cfb_size(dev_priv, cache),
|
||||
plane_state->hw.fb->format->cpp[0])) {
|
||||
intel_fbc_cfb_size(dev_priv, cache), min_limit)) {
|
||||
cache->plane.visible = false;
|
||||
fbc->no_fbc_reason = "not enough stolen memory";
|
||||
goto out;
|
||||
}
|
||||
|
||||
cache->gen9_wa_cfb_stride = intel_fbc_gen9_wa_cfb_stride(dev_priv);
|
||||
|
||||
drm_dbg_kms(&dev_priv->drm, "Enabling FBC on pipe %c\n",
|
||||
pipe_name(crtc->pipe));
|
||||
fbc->no_fbc_reason = "FBC enabled but not active yet\n";
|
||||
@ -1322,6 +1388,28 @@ void intel_fbc_disable(struct intel_crtc *crtc)
|
||||
mutex_unlock(&fbc->lock);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_fbc_update: enable/disable FBC on the CRTC
|
||||
* @state: atomic state
|
||||
* @crtc: the CRTC
|
||||
*
|
||||
* This function checks if the given CRTC was chosen for FBC, then enables it if
|
||||
* possible. Notice that it doesn't activate FBC. It is valid to call
|
||||
* intel_fbc_update multiple times for the same pipe without an
|
||||
* intel_fbc_disable in the middle.
|
||||
*/
|
||||
void intel_fbc_update(struct intel_atomic_state *state,
|
||||
struct intel_crtc *crtc)
|
||||
{
|
||||
const struct intel_crtc_state *crtc_state =
|
||||
intel_atomic_get_new_crtc_state(state, crtc);
|
||||
|
||||
if (crtc_state->update_pipe && !crtc_state->enable_fbc)
|
||||
intel_fbc_disable(crtc);
|
||||
else
|
||||
intel_fbc_enable(state, crtc);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_fbc_global_disable - globally disable FBC
|
||||
* @dev_priv: i915 device instance
|
||||
|
@@ -24,7 +24,7 @@ bool intel_fbc_pre_update(struct intel_atomic_state *state,
void intel_fbc_post_update(struct intel_atomic_state *state,
			   struct intel_crtc *crtc);
void intel_fbc_init(struct drm_i915_private *dev_priv);
void intel_fbc_enable(struct intel_atomic_state *state,
void intel_fbc_update(struct intel_atomic_state *state,
		      struct intel_crtc *crtc);
void intel_fbc_disable(struct intel_crtc *crtc);
void intel_fbc_global_disable(struct drm_i915_private *dev_priv);
@ -45,6 +45,7 @@
|
||||
|
||||
#include "i915_drv.h"
|
||||
#include "intel_display_types.h"
|
||||
#include "intel_fb.h"
|
||||
#include "intel_fbdev.h"
|
||||
#include "intel_frontbuffer.h"
|
||||
|
||||
@ -229,8 +230,6 @@ static int intelfb_create(struct drm_fb_helper *helper,
|
||||
goto out_unlock;
|
||||
}
|
||||
|
||||
intel_frontbuffer_flush(to_frontbuffer(ifbdev), ORIGIN_DIRTYFB);
|
||||
|
||||
info = drm_fb_helper_alloc_fbi(helper);
|
||||
if (IS_ERR(info)) {
|
||||
drm_err(&dev_priv->drm, "Failed to allocate fb_info\n");
|
||||
|
@ -2,11 +2,112 @@
|
||||
/*
|
||||
* Copyright © 2020 Intel Corporation
|
||||
*/
|
||||
|
||||
#include "intel_atomic.h"
|
||||
#include "intel_ddi.h"
|
||||
#include "intel_de.h"
|
||||
#include "intel_display_types.h"
|
||||
#include "intel_fdi.h"
|
||||
#include "intel_sideband.h"
|
||||
|
||||
static void assert_fdi_tx(struct drm_i915_private *dev_priv,
|
||||
enum pipe pipe, bool state)
|
||||
{
|
||||
bool cur_state;
|
||||
|
||||
if (HAS_DDI(dev_priv)) {
|
||||
/*
|
||||
* DDI does not have a specific FDI_TX register.
|
||||
*
|
||||
* FDI is never fed from EDP transcoder
|
||||
* so pipe->transcoder cast is fine here.
|
||||
*/
|
||||
enum transcoder cpu_transcoder = (enum transcoder)pipe;
|
||||
cur_state = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder)) & TRANS_DDI_FUNC_ENABLE;
|
||||
} else {
|
||||
cur_state = intel_de_read(dev_priv, FDI_TX_CTL(pipe)) & FDI_TX_ENABLE;
|
||||
}
|
||||
I915_STATE_WARN(cur_state != state,
|
||||
"FDI TX state assertion failure (expected %s, current %s)\n",
|
||||
onoff(state), onoff(cur_state));
|
||||
}
|
||||
|
||||
void assert_fdi_tx_enabled(struct drm_i915_private *i915, enum pipe pipe)
|
||||
{
|
||||
assert_fdi_tx(i915, pipe, true);
|
||||
}
|
||||
|
||||
void assert_fdi_tx_disabled(struct drm_i915_private *i915, enum pipe pipe)
|
||||
{
|
||||
assert_fdi_tx(i915, pipe, false);
|
||||
}
|
||||
|
||||
static void assert_fdi_rx(struct drm_i915_private *dev_priv,
|
||||
enum pipe pipe, bool state)
|
||||
{
|
||||
bool cur_state;
|
||||
|
||||
cur_state = intel_de_read(dev_priv, FDI_RX_CTL(pipe)) & FDI_RX_ENABLE;
|
||||
I915_STATE_WARN(cur_state != state,
|
||||
"FDI RX state assertion failure (expected %s, current %s)\n",
|
||||
onoff(state), onoff(cur_state));
|
||||
}
|
||||
|
||||
void assert_fdi_rx_enabled(struct drm_i915_private *i915, enum pipe pipe)
|
||||
{
|
||||
assert_fdi_rx(i915, pipe, true);
|
||||
}
|
||||
|
||||
void assert_fdi_rx_disabled(struct drm_i915_private *i915, enum pipe pipe)
|
||||
{
|
||||
assert_fdi_rx(i915, pipe, false);
|
||||
}
|
||||
|
||||
void assert_fdi_tx_pll_enabled(struct drm_i915_private *i915,
|
||||
enum pipe pipe)
|
||||
{
|
||||
bool cur_state;
|
||||
|
||||
/* ILK FDI PLL is always enabled */
|
||||
if (IS_IRONLAKE(i915))
|
||||
return;
|
||||
|
||||
/* On Haswell, DDI ports are responsible for the FDI PLL setup */
|
||||
if (HAS_DDI(i915))
|
||||
return;
|
||||
|
||||
cur_state = intel_de_read(i915, FDI_TX_CTL(pipe)) & FDI_TX_PLL_ENABLE;
|
||||
I915_STATE_WARN(!cur_state, "FDI TX PLL assertion failure, should be active but is disabled\n");
|
||||
}
|
||||
|
||||
static void assert_fdi_rx_pll(struct drm_i915_private *i915,
|
||||
enum pipe pipe, bool state)
|
||||
{
|
||||
bool cur_state;
|
||||
|
||||
cur_state = intel_de_read(i915, FDI_RX_CTL(pipe)) & FDI_RX_PLL_ENABLE;
|
||||
I915_STATE_WARN(cur_state != state,
|
||||
"FDI RX PLL assertion failure (expected %s, current %s)\n",
|
||||
onoff(state), onoff(cur_state));
|
||||
}
|
||||
|
||||
void assert_fdi_rx_pll_enabled(struct drm_i915_private *i915, enum pipe pipe)
|
||||
{
|
||||
assert_fdi_rx_pll(i915, pipe, true);
|
||||
}
|
||||
|
||||
void assert_fdi_rx_pll_disabled(struct drm_i915_private *i915, enum pipe pipe)
|
||||
{
|
||||
assert_fdi_rx_pll(i915, pipe, false);
|
||||
}
|
||||
|
||||
void intel_fdi_link_train(struct intel_crtc *crtc,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
|
||||
|
||||
dev_priv->fdi_funcs->fdi_link_train(crtc, crtc_state);
|
||||
}
|
||||
|
||||
/* units of 100MHz */
|
||||
static int pipe_required_fdi_lanes(struct intel_crtc_state *crtc_state)
|
||||
@ -91,10 +192,36 @@ static int ilk_check_fdi_lanes(struct drm_device *dev, enum pipe pipe,
|
||||
}
|
||||
return 0;
|
||||
default:
|
||||
BUG();
|
||||
MISSING_CASE(pipe);
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
void intel_fdi_pll_freq_update(struct drm_i915_private *i915)
|
||||
{
|
||||
if (IS_IRONLAKE(i915)) {
|
||||
u32 fdi_pll_clk =
|
||||
intel_de_read(i915, FDI_PLL_BIOS_0) & FDI_PLL_FB_CLOCK_MASK;
|
||||
|
||||
i915->fdi_pll_freq = (fdi_pll_clk + 2) * 10000;
|
||||
} else if (IS_SANDYBRIDGE(i915) || IS_IVYBRIDGE(i915)) {
|
||||
i915->fdi_pll_freq = 270000;
|
||||
} else {
|
||||
return;
|
||||
}
|
||||
|
||||
drm_dbg(&i915->drm, "FDI PLL freq=%d\n", i915->fdi_pll_freq);
|
||||
}
|
||||
|
||||
int intel_fdi_link_freq(struct drm_i915_private *i915,
|
||||
const struct intel_crtc_state *pipe_config)
|
||||
{
|
||||
if (HAS_DDI(i915))
|
||||
return pipe_config->port_clock; /* SPLL */
|
||||
else
|
||||
return i915->fdi_pll_freq;
|
||||
}
|
||||
|
||||
int ilk_fdi_compute_config(struct intel_crtc *crtc,
|
||||
struct intel_crtc_state *pipe_config)
|
||||
{
|
||||
@ -140,11 +267,60 @@ retry:
|
||||
}
|
||||
|
||||
if (needs_recompute)
|
||||
return I915_DISPLAY_CONFIG_RETRY;
|
||||
return -EAGAIN;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void cpt_set_fdi_bc_bifurcation(struct drm_i915_private *dev_priv, bool enable)
|
||||
{
|
||||
u32 temp;
|
||||
|
||||
temp = intel_de_read(dev_priv, SOUTH_CHICKEN1);
|
||||
if (!!(temp & FDI_BC_BIFURCATION_SELECT) == enable)
|
||||
return;
|
||||
|
||||
drm_WARN_ON(&dev_priv->drm,
|
||||
intel_de_read(dev_priv, FDI_RX_CTL(PIPE_B)) &
|
||||
FDI_RX_ENABLE);
|
||||
drm_WARN_ON(&dev_priv->drm,
|
||||
intel_de_read(dev_priv, FDI_RX_CTL(PIPE_C)) &
|
||||
FDI_RX_ENABLE);
|
||||
|
||||
temp &= ~FDI_BC_BIFURCATION_SELECT;
|
||||
if (enable)
|
||||
temp |= FDI_BC_BIFURCATION_SELECT;
|
||||
|
||||
drm_dbg_kms(&dev_priv->drm, "%sabling fdi C rx\n",
|
||||
enable ? "en" : "dis");
|
||||
intel_de_write(dev_priv, SOUTH_CHICKEN1, temp);
|
||||
intel_de_posting_read(dev_priv, SOUTH_CHICKEN1);
|
||||
}
|
||||
|
||||
static void ivb_update_fdi_bc_bifurcation(const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
|
||||
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
|
||||
|
||||
switch (crtc->pipe) {
|
||||
case PIPE_A:
|
||||
break;
|
||||
case PIPE_B:
|
||||
if (crtc_state->fdi_lanes > 2)
|
||||
cpt_set_fdi_bc_bifurcation(dev_priv, false);
|
||||
else
|
||||
cpt_set_fdi_bc_bifurcation(dev_priv, true);
|
||||
|
||||
break;
|
||||
case PIPE_C:
|
||||
cpt_set_fdi_bc_bifurcation(dev_priv, true);
|
||||
|
||||
break;
|
||||
default:
|
||||
MISSING_CASE(crtc->pipe);
|
||||
}
|
||||
}
|
||||
|
||||
void intel_fdi_normal_train(struct intel_crtc *crtc)
|
||||
{
|
||||
struct drm_device *dev = crtc->base.dev;
|
||||
@ -196,8 +372,15 @@ static void ilk_fdi_link_train(struct intel_crtc *crtc,
|
||||
i915_reg_t reg;
|
||||
u32 temp, tries;
|
||||
|
||||
/*
|
||||
* Write the TU size bits before fdi link training, so that error
|
||||
* detection works.
|
||||
*/
|
||||
intel_de_write(dev_priv, FDI_RX_TUSIZE1(pipe),
|
||||
intel_de_read(dev_priv, PIPE_DATA_M1(pipe)) & TU_SIZE_MASK);
|
||||
|
||||
/* FDI needs bits from pipe first */
|
||||
assert_pipe_enabled(dev_priv, crtc_state->cpu_transcoder);
|
||||
assert_transcoder_enabled(dev_priv, crtc_state->cpu_transcoder);
|
||||
|
||||
/* Train 1: unmask FDI RX Interrupt symbol_lock and bit_lock bit
for train result */
|
||||
@ -299,6 +482,13 @@ static void gen6_fdi_link_train(struct intel_crtc *crtc,
|
||||
i915_reg_t reg;
|
||||
u32 temp, i, retry;
|
||||
|
||||
/*
|
||||
* Write the TU size bits before fdi link training, so that error
|
||||
* detection works.
|
||||
*/
|
||||
intel_de_write(dev_priv, FDI_RX_TUSIZE1(pipe),
|
||||
intel_de_read(dev_priv, PIPE_DATA_M1(pipe)) & TU_SIZE_MASK);
|
||||
|
||||
/* Train 1: unmask FDI RX Interrupt symbol_lock and bit_lock bit
for train result */
|
||||
reg = FDI_RX_IMR(pipe);
|
||||
@ -436,6 +626,15 @@ static void ivb_manual_fdi_link_train(struct intel_crtc *crtc,
|
||||
i915_reg_t reg;
|
||||
u32 temp, i, j;
|
||||
|
||||
ivb_update_fdi_bc_bifurcation(crtc_state);
|
||||
|
||||
/*
|
||||
* Write the TU size bits before fdi link training, so that error
|
||||
* detection works.
|
||||
*/
|
||||
intel_de_write(dev_priv, FDI_RX_TUSIZE1(pipe),
|
||||
intel_de_read(dev_priv, PIPE_DATA_M1(pipe)) & TU_SIZE_MASK);
|
||||
|
||||
/* Train 1: unmask FDI RX Interrupt symbol_lock and bit_lock bit
for train result */
|
||||
reg = FDI_RX_IMR(pipe);
|
||||
@ -807,15 +1006,125 @@ void ilk_fdi_disable(struct intel_crtc *crtc)
|
||||
udelay(100);
|
||||
}
|
||||
|
||||
static void lpt_fdi_reset_mphy(struct drm_i915_private *dev_priv)
|
||||
{
|
||||
u32 tmp;
|
||||
|
||||
tmp = intel_de_read(dev_priv, SOUTH_CHICKEN2);
|
||||
tmp |= FDI_MPHY_IOSFSB_RESET_CTL;
|
||||
intel_de_write(dev_priv, SOUTH_CHICKEN2, tmp);
|
||||
|
||||
if (wait_for_us(intel_de_read(dev_priv, SOUTH_CHICKEN2) &
|
||||
FDI_MPHY_IOSFSB_RESET_STATUS, 100))
|
||||
drm_err(&dev_priv->drm, "FDI mPHY reset assert timeout\n");
|
||||
|
||||
tmp = intel_de_read(dev_priv, SOUTH_CHICKEN2);
|
||||
tmp &= ~FDI_MPHY_IOSFSB_RESET_CTL;
|
||||
intel_de_write(dev_priv, SOUTH_CHICKEN2, tmp);
|
||||
|
||||
if (wait_for_us((intel_de_read(dev_priv, SOUTH_CHICKEN2) &
|
||||
FDI_MPHY_IOSFSB_RESET_STATUS) == 0, 100))
|
||||
drm_err(&dev_priv->drm, "FDI mPHY reset de-assert timeout\n");
|
||||
}
|
||||
|
||||
/* WaMPhyProgramming:hsw */
|
||||
void lpt_fdi_program_mphy(struct drm_i915_private *dev_priv)
|
||||
{
|
||||
u32 tmp;
|
||||
|
||||
lpt_fdi_reset_mphy(dev_priv);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x8008, SBI_MPHY);
|
||||
tmp &= ~(0xFF << 24);
|
||||
tmp |= (0x12 << 24);
|
||||
intel_sbi_write(dev_priv, 0x8008, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x2008, SBI_MPHY);
|
||||
tmp |= (1 << 11);
|
||||
intel_sbi_write(dev_priv, 0x2008, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x2108, SBI_MPHY);
|
||||
tmp |= (1 << 11);
|
||||
intel_sbi_write(dev_priv, 0x2108, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x206C, SBI_MPHY);
|
||||
tmp |= (1 << 24) | (1 << 21) | (1 << 18);
|
||||
intel_sbi_write(dev_priv, 0x206C, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x216C, SBI_MPHY);
|
||||
tmp |= (1 << 24) | (1 << 21) | (1 << 18);
|
||||
intel_sbi_write(dev_priv, 0x216C, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x2080, SBI_MPHY);
|
||||
tmp &= ~(7 << 13);
|
||||
tmp |= (5 << 13);
|
||||
intel_sbi_write(dev_priv, 0x2080, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x2180, SBI_MPHY);
|
||||
tmp &= ~(7 << 13);
|
||||
tmp |= (5 << 13);
|
||||
intel_sbi_write(dev_priv, 0x2180, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x208C, SBI_MPHY);
|
||||
tmp &= ~0xFF;
|
||||
tmp |= 0x1C;
|
||||
intel_sbi_write(dev_priv, 0x208C, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x218C, SBI_MPHY);
|
||||
tmp &= ~0xFF;
|
||||
tmp |= 0x1C;
|
||||
intel_sbi_write(dev_priv, 0x218C, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x2098, SBI_MPHY);
|
||||
tmp &= ~(0xFF << 16);
|
||||
tmp |= (0x1C << 16);
|
||||
intel_sbi_write(dev_priv, 0x2098, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x2198, SBI_MPHY);
|
||||
tmp &= ~(0xFF << 16);
|
||||
tmp |= (0x1C << 16);
|
||||
intel_sbi_write(dev_priv, 0x2198, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x20C4, SBI_MPHY);
|
||||
tmp |= (1 << 27);
|
||||
intel_sbi_write(dev_priv, 0x20C4, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x21C4, SBI_MPHY);
|
||||
tmp |= (1 << 27);
|
||||
intel_sbi_write(dev_priv, 0x21C4, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x20EC, SBI_MPHY);
|
||||
tmp &= ~(0xF << 28);
|
||||
tmp |= (4 << 28);
|
||||
intel_sbi_write(dev_priv, 0x20EC, tmp, SBI_MPHY);
|
||||
|
||||
tmp = intel_sbi_read(dev_priv, 0x21EC, SBI_MPHY);
|
||||
tmp &= ~(0xF << 28);
|
||||
tmp |= (4 << 28);
|
||||
intel_sbi_write(dev_priv, 0x21EC, tmp, SBI_MPHY);
|
||||
}
|
||||
|
||||
static const struct intel_fdi_funcs ilk_funcs = {
.fdi_link_train = ilk_fdi_link_train,
};

static const struct intel_fdi_funcs gen6_funcs = {
.fdi_link_train = gen6_fdi_link_train,
};

static const struct intel_fdi_funcs ivb_funcs = {
.fdi_link_train = ivb_manual_fdi_link_train,
};

void
intel_fdi_init_hook(struct drm_i915_private *dev_priv)
{
if (IS_IRONLAKE(dev_priv)) {
dev_priv->display.fdi_link_train = ilk_fdi_link_train;
dev_priv->fdi_funcs = &ilk_funcs;
} else if (IS_SANDYBRIDGE(dev_priv)) {
dev_priv->display.fdi_link_train = gen6_fdi_link_train;
dev_priv->fdi_funcs = &gen6_funcs;
} else if (IS_IVYBRIDGE(dev_priv)) {
/* FIXME: detect B0+ stepping and use auto training */
dev_priv->display.fdi_link_train = ivb_manual_fdi_link_train;
dev_priv->fdi_funcs = &ivb_funcs;
}
}
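The intel_fdi_funcs tables and intel_fdi_init_hook() above pick a per-platform link-training implementation once at init time and dispatch through a function pointer afterwards. The small standalone C sketch below shows that pattern in isolation; the platform ids, struct and function names are made up for illustration and are not the driver's types.

#include <stdio.h>

enum platform { PLAT_ILK, PLAT_SNB, PLAT_IVB };

/* Hypothetical per-platform operations table. */
struct fdi_funcs {
	void (*link_train)(int pipe);
};

static void ilk_link_train(int pipe)  { printf("ILK training pipe %d\n", pipe); }
static void gen6_link_train(int pipe) { printf("SNB training pipe %d\n", pipe); }
static void ivb_link_train(int pipe)  { printf("IVB training pipe %d\n", pipe); }

static const struct fdi_funcs ilk_funcs  = { .link_train = ilk_link_train };
static const struct fdi_funcs gen6_funcs = { .link_train = gen6_link_train };
static const struct fdi_funcs ivb_funcs  = { .link_train = ivb_link_train };

/* Pick the table once, based on the platform. */
static const struct fdi_funcs *fdi_init_hook(enum platform p)
{
	switch (p) {
	case PLAT_ILK: return &ilk_funcs;
	case PLAT_SNB: return &gen6_funcs;
	case PLAT_IVB: return &ivb_funcs;
	}
	return NULL;
}

int main(void)
{
	const struct fdi_funcs *funcs = fdi_init_hook(PLAT_SNB);

	if (funcs)
		funcs->link_train(0);	/* callers never branch on the platform again */
	return 0;
}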
@ -6,12 +6,14 @@
|
||||
#ifndef _INTEL_FDI_H_
|
||||
#define _INTEL_FDI_H_
|
||||
|
||||
enum pipe;
|
||||
struct drm_i915_private;
|
||||
struct intel_crtc;
|
||||
struct intel_crtc_state;
|
||||
struct intel_encoder;
|
||||
|
||||
#define I915_DISPLAY_CONFIG_RETRY 1
|
||||
int intel_fdi_link_freq(struct drm_i915_private *i915,
|
||||
const struct intel_crtc_state *pipe_config);
|
||||
int ilk_fdi_compute_config(struct intel_crtc *intel_crtc,
|
||||
struct intel_crtc_state *pipe_config);
|
||||
void intel_fdi_normal_train(struct intel_crtc *crtc);
|
||||
@ -21,5 +23,18 @@ void ilk_fdi_pll_enable(const struct intel_crtc_state *crtc_state);
|
||||
void intel_fdi_init_hook(struct drm_i915_private *dev_priv);
|
||||
void hsw_fdi_link_train(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state);
|
||||
void intel_fdi_pll_freq_update(struct drm_i915_private *i915);
|
||||
void lpt_fdi_program_mphy(struct drm_i915_private *i915);
|
||||
|
||||
void intel_fdi_link_train(struct intel_crtc *crtc,
|
||||
const struct intel_crtc_state *crtc_state);
|
||||
|
||||
void assert_fdi_tx_enabled(struct drm_i915_private *i915, enum pipe pipe);
|
||||
void assert_fdi_tx_disabled(struct drm_i915_private *i915, enum pipe pipe);
|
||||
void assert_fdi_rx_enabled(struct drm_i915_private *i915, enum pipe pipe);
|
||||
void assert_fdi_rx_disabled(struct drm_i915_private *i915, enum pipe pipe);
|
||||
void assert_fdi_tx_pll_enabled(struct drm_i915_private *i915, enum pipe pipe);
|
||||
void assert_fdi_rx_pll_enabled(struct drm_i915_private *i915, enum pipe pipe);
|
||||
void assert_fdi_rx_pll_disabled(struct drm_i915_private *i915, enum pipe pipe);
|
||||
|
||||
#endif
|
||||
|
@ -62,6 +62,7 @@
|
||||
#include "intel_display_types.h"
|
||||
#include "intel_fbc.h"
|
||||
#include "intel_frontbuffer.h"
|
||||
#include "intel_drrs.h"
|
||||
#include "intel_psr.h"
|
||||
|
||||
/**
|
||||
@ -91,7 +92,7 @@ static void frontbuffer_flush(struct drm_i915_private *i915,
|
||||
trace_intel_frontbuffer_flush(frontbuffer_bits, origin);
|
||||
|
||||
might_sleep();
|
||||
intel_edp_drrs_flush(i915, frontbuffer_bits);
|
||||
intel_drrs_flush(i915, frontbuffer_bits);
|
||||
intel_psr_flush(i915, frontbuffer_bits, origin);
|
||||
intel_fbc_flush(i915, frontbuffer_bits, origin);
|
||||
}
|
||||
@ -180,7 +181,7 @@ void __intel_fb_invalidate(struct intel_frontbuffer *front,
|
||||
|
||||
might_sleep();
|
||||
intel_psr_invalidate(i915, frontbuffer_bits, origin);
|
||||
intel_edp_drrs_invalidate(i915, frontbuffer_bits);
|
||||
intel_drrs_invalidate(i915, frontbuffer_bits);
|
||||
intel_fbc_invalidate(i915, frontbuffer_bits, origin);
|
||||
}
|
||||
|
||||
|
@ -33,11 +33,11 @@
struct drm_i915_private;

enum fb_op_origin {
ORIGIN_GTT,
ORIGIN_CPU,
ORIGIN_CPU = 0,
ORIGIN_CS,
ORIGIN_FLIP,
ORIGIN_DIRTYFB,
ORIGIN_CURSOR_UPDATE,
};

struct intel_frontbuffer {
@ -33,21 +33,6 @@ static int intel_conn_to_vcpi(struct intel_connector *connector)
|
||||
return connector->port ? connector->port->vcpi.vcpi : 0;
|
||||
}
|
||||
|
||||
static bool
|
||||
intel_streams_type1_capable(struct intel_connector *connector)
|
||||
{
|
||||
const struct intel_hdcp_shim *shim = connector->hdcp.shim;
|
||||
bool capable = false;
|
||||
|
||||
if (!shim)
|
||||
return capable;
|
||||
|
||||
if (shim->streams_type1_capable)
|
||||
shim->streams_type1_capable(connector, &capable);
|
||||
|
||||
return capable;
|
||||
}
|
||||
|
||||
/*
|
||||
* intel_hdcp_required_content_stream selects the most highest common possible HDCP
|
||||
* content_type for all streams in DP MST topology because security f/w doesn't
|
||||
@ -86,7 +71,7 @@ intel_hdcp_required_content_stream(struct intel_digital_port *dig_port)
|
||||
if (conn_dig_port != dig_port)
|
||||
continue;
|
||||
|
||||
if (!enforce_type0 && !intel_streams_type1_capable(connector))
|
||||
if (!enforce_type0 && !dig_port->hdcp_mst_type1_capable)
|
||||
enforce_type0 = true;
|
||||
|
||||
data->streams[data->k].stream_id = intel_conn_to_vcpi(connector);
|
||||
@ -112,6 +97,25 @@ intel_hdcp_required_content_stream(struct intel_digital_port *dig_port)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int intel_hdcp_prepare_streams(struct intel_connector *connector)
|
||||
{
|
||||
struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
|
||||
struct hdcp_port_data *data = &dig_port->hdcp_port_data;
|
||||
struct intel_hdcp *hdcp = &connector->hdcp;
|
||||
int ret;
|
||||
|
||||
if (!intel_encoder_is_mst(intel_attached_encoder(connector))) {
|
||||
data->k = 1;
|
||||
data->streams[0].stream_type = hdcp->content_type;
|
||||
} else {
|
||||
ret = intel_hdcp_required_content_stream(dig_port);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static
|
||||
bool intel_hdcp_is_ksv_valid(u8 *ksv)
|
||||
{
|
||||
@ -1632,6 +1636,14 @@ int hdcp2_authenticate_repeater_topology(struct intel_connector *connector)
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/*
|
||||
* MST topology is not Type 1 capable if it contains a downstream
|
||||
* device that is only HDCP 1.x or Legacy HDCP 2.0/2.1 compliant.
|
||||
*/
|
||||
dig_port->hdcp_mst_type1_capable =
|
||||
!HDCP_2_2_HDCP1_DEVICE_CONNECTED(rx_info[1]) &&
|
||||
!HDCP_2_2_HDCP_2_0_REP_CONNECTED(rx_info[1]);
|
||||
|
||||
/* Converting and Storing the seq_num_v to local variable as DWORD */
|
||||
seq_num_v =
|
||||
drm_hdcp_be24_to_cpu((const u8 *)msgs.recvid_list.seq_num_v);
|
||||
@ -1876,6 +1888,14 @@ static int hdcp2_authenticate_and_encrypt(struct intel_connector *connector)
|
||||
for (i = 0; i < tries && !dig_port->hdcp_auth_status; i++) {
|
||||
ret = hdcp2_authenticate_sink(connector);
|
||||
if (!ret) {
|
||||
ret = intel_hdcp_prepare_streams(connector);
|
||||
if (ret) {
|
||||
drm_dbg_kms(&i915->drm,
|
||||
"Prepare streams failed.(%d)\n",
|
||||
ret);
|
||||
break;
|
||||
}
|
||||
|
||||
ret = hdcp2_propagate_stream_management_info(connector);
|
||||
if (ret) {
|
||||
drm_dbg_kms(&i915->drm,
|
||||
@ -1921,9 +1941,7 @@ static int hdcp2_authenticate_and_encrypt(struct intel_connector *connector)
|
||||
|
||||
static int _intel_hdcp2_enable(struct intel_connector *connector)
|
||||
{
|
||||
struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
|
||||
struct drm_i915_private *i915 = to_i915(connector->base.dev);
|
||||
struct hdcp_port_data *data = &dig_port->hdcp_port_data;
|
||||
struct intel_hdcp *hdcp = &connector->hdcp;
|
||||
int ret;
|
||||
|
||||
@ -1931,16 +1949,6 @@ static int _intel_hdcp2_enable(struct intel_connector *connector)
|
||||
connector->base.name, connector->base.base.id,
|
||||
hdcp->content_type);
|
||||
|
||||
/* Stream which requires encryption */
|
||||
if (!intel_encoder_is_mst(intel_attached_encoder(connector))) {
|
||||
data->k = 1;
|
||||
data->streams[0].stream_type = hdcp->content_type;
|
||||
} else {
|
||||
ret = intel_hdcp_required_content_stream(dig_port);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = hdcp2_authenticate_and_encrypt(connector);
|
||||
if (ret) {
|
||||
drm_dbg_kms(&i915->drm, "HDCP2 Type%d Enabling Failed. (%d)\n",
|
||||
|
@ -53,21 +53,20 @@
|
||||
#include "intel_panel.h"
|
||||
#include "intel_snps_phy.h"
|
||||
|
||||
static struct drm_device *intel_hdmi_to_dev(struct intel_hdmi *intel_hdmi)
|
||||
static struct drm_i915_private *intel_hdmi_to_i915(struct intel_hdmi *intel_hdmi)
|
||||
{
|
||||
return hdmi_to_dig_port(intel_hdmi)->base.base.dev;
|
||||
return to_i915(hdmi_to_dig_port(intel_hdmi)->base.base.dev);
|
||||
}
|
||||
|
||||
static void
|
||||
assert_hdmi_port_disabled(struct intel_hdmi *intel_hdmi)
|
||||
{
|
||||
struct drm_device *dev = intel_hdmi_to_dev(intel_hdmi);
|
||||
struct drm_i915_private *dev_priv = to_i915(dev);
|
||||
struct drm_i915_private *dev_priv = intel_hdmi_to_i915(intel_hdmi);
|
||||
u32 enabled_bits;
|
||||
|
||||
enabled_bits = HAS_DDI(dev_priv) ? DDI_BUF_CTL_ENABLE : SDVO_ENABLE;
|
||||
|
||||
drm_WARN(dev,
|
||||
drm_WARN(&dev_priv->drm,
|
||||
intel_de_read(dev_priv, intel_hdmi->hdmi_reg) & enabled_bits,
|
||||
"HDMI port enabled, expecting disabled\n");
|
||||
}
|
||||
@ -1246,7 +1245,7 @@ static void hsw_set_infoframes(struct intel_encoder *encoder,
|
||||
|
||||
void intel_dp_dual_mode_set_tmds_output(struct intel_hdmi *hdmi, bool enable)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(intel_hdmi_to_dev(hdmi));
|
||||
struct drm_i915_private *dev_priv = intel_hdmi_to_i915(hdmi);
|
||||
struct i2c_adapter *adapter =
|
||||
intel_gmbus_get_adapter(dev_priv, hdmi->ddc_bus);
|
||||
|
||||
@ -1703,7 +1702,7 @@ int intel_hdmi_hdcp2_read_msg(struct intel_digital_port *dig_port,
|
||||
drm_dbg_kms(&i915->drm,
|
||||
"msg_sz(%zd) is more than exp size(%zu)\n",
|
||||
ret, size);
|
||||
return -1;
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
offset = HDCP_2_2_HDMI_REG_RD_MSG_OFFSET;
|
||||
@ -1830,7 +1829,7 @@ hdmi_port_clock_valid(struct intel_hdmi *hdmi,
|
||||
int clock, bool respect_downstream_limits,
|
||||
bool has_hdmi_sink)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(intel_hdmi_to_dev(hdmi));
|
||||
struct drm_i915_private *dev_priv = intel_hdmi_to_i915(hdmi);
|
||||
|
||||
if (clock < 25000)
|
||||
return MODE_CLOCK_LOW;
|
||||
@ -1946,8 +1945,7 @@ intel_hdmi_mode_valid(struct drm_connector *connector,
|
||||
struct drm_display_mode *mode)
|
||||
{
|
||||
struct intel_hdmi *hdmi = intel_attached_hdmi(to_intel_connector(connector));
|
||||
struct drm_device *dev = intel_hdmi_to_dev(hdmi);
|
||||
struct drm_i915_private *dev_priv = to_i915(dev);
|
||||
struct drm_i915_private *dev_priv = intel_hdmi_to_i915(hdmi);
|
||||
enum drm_mode_status status;
|
||||
int clock = mode->clock;
|
||||
int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
|
||||
@ -2210,7 +2208,7 @@ int intel_hdmi_compute_config(struct intel_encoder *encoder,
|
||||
return ret;
|
||||
|
||||
if (pipe_config->output_format == INTEL_OUTPUT_FORMAT_YCBCR420) {
|
||||
ret = intel_pch_panel_fitting(pipe_config, conn_state);
|
||||
ret = intel_panel_fitting(pipe_config, conn_state);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
@ -215,8 +215,8 @@ intel_hpd_irq_storm_switch_to_polling(struct drm_i915_private *dev_priv)
|
||||
|
||||
static void intel_hpd_irq_setup(struct drm_i915_private *i915)
|
||||
{
|
||||
if (i915->display_irqs_enabled && i915->display.hpd_irq_setup)
|
||||
i915->display.hpd_irq_setup(i915);
|
||||
if (i915->display_irqs_enabled && i915->hotplug_funcs->hpd_irq_setup)
|
||||
i915->hotplug_funcs->hpd_irq_setup(i915);
|
||||
}
|
||||
|
||||
static void intel_hpd_irq_storm_reenable_work(struct work_struct *work)
|
||||
|
@ -40,9 +40,12 @@
|
||||
|
||||
#include "i915_drv.h"
|
||||
#include "intel_atomic.h"
|
||||
#include "intel_backlight.h"
|
||||
#include "intel_connector.h"
|
||||
#include "intel_de.h"
|
||||
#include "intel_display_types.h"
|
||||
#include "intel_dpll.h"
|
||||
#include "intel_fdi.h"
|
||||
#include "intel_gmbus.h"
|
||||
#include "intel_lvds.h"
|
||||
#include "intel_panel.h"
|
||||
@ -323,7 +326,7 @@ static void intel_enable_lvds(struct intel_atomic_state *state,
|
||||
drm_err(&dev_priv->drm,
|
||||
"timed out waiting for panel to power on\n");
|
||||
|
||||
intel_panel_enable_backlight(pipe_config, conn_state);
|
||||
intel_backlight_enable(pipe_config, conn_state);
|
||||
}
|
||||
|
||||
static void intel_disable_lvds(struct intel_atomic_state *state,
|
||||
@ -351,7 +354,7 @@ static void gmch_disable_lvds(struct intel_atomic_state *state,
|
||||
const struct drm_connector_state *old_conn_state)
|
||||
|
||||
{
|
||||
intel_panel_disable_backlight(old_conn_state);
|
||||
intel_backlight_disable(old_conn_state);
|
||||
|
||||
intel_disable_lvds(state, encoder, old_crtc_state, old_conn_state);
|
||||
}
|
||||
@ -361,7 +364,7 @@ static void pch_disable_lvds(struct intel_atomic_state *state,
|
||||
const struct intel_crtc_state *old_crtc_state,
|
||||
const struct drm_connector_state *old_conn_state)
|
||||
{
|
||||
intel_panel_disable_backlight(old_conn_state);
|
||||
intel_backlight_disable(old_conn_state);
|
||||
}
|
||||
|
||||
static void pch_post_disable_lvds(struct intel_atomic_state *state,
|
||||
@ -388,13 +391,15 @@ intel_lvds_mode_valid(struct drm_connector *connector,
|
||||
struct intel_connector *intel_connector = to_intel_connector(connector);
|
||||
struct drm_display_mode *fixed_mode = intel_connector->panel.fixed_mode;
|
||||
int max_pixclk = to_i915(connector->dev)->max_dotclk_freq;
|
||||
enum drm_mode_status status;
|
||||
|
||||
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
|
||||
return MODE_NO_DBLESCAN;
|
||||
if (mode->hdisplay > fixed_mode->hdisplay)
|
||||
return MODE_PANEL;
|
||||
if (mode->vdisplay > fixed_mode->vdisplay)
|
||||
return MODE_PANEL;
|
||||
|
||||
status = intel_panel_mode_valid(intel_connector, mode);
|
||||
if (status != MODE_OK)
|
||||
return status;
|
||||
|
||||
if (fixed_mode->clock > max_pixclk)
|
||||
return MODE_CLOCK_HIGH;
|
||||
|
||||
@ -441,8 +446,9 @@ static int intel_lvds_compute_config(struct intel_encoder *intel_encoder,
|
||||
* with the panel scaling set up to source from the H/VDisplay
|
||||
* of the original mode.
|
||||
*/
|
||||
intel_fixed_panel_mode(intel_connector->panel.fixed_mode,
|
||||
adjusted_mode);
|
||||
ret = intel_panel_compute_config(intel_connector, adjusted_mode);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
|
||||
return -EINVAL;
|
||||
@ -450,10 +456,7 @@ static int intel_lvds_compute_config(struct intel_encoder *intel_encoder,
|
||||
if (HAS_PCH_SPLIT(dev_priv))
|
||||
pipe_config->has_pch_encoder = true;
|
||||
|
||||
if (HAS_GMCH(dev_priv))
|
||||
ret = intel_gmch_panel_fitting(pipe_config, conn_state);
|
||||
else
|
||||
ret = intel_pch_panel_fitting(pipe_config, conn_state);
|
||||
ret = intel_panel_fitting(pipe_config, conn_state);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -906,7 +909,7 @@ void intel_lvds_init(struct drm_i915_private *dev_priv)
|
||||
}
|
||||
intel_encoder->get_hw_state = intel_lvds_get_hw_state;
|
||||
intel_encoder->get_config = intel_lvds_get_config;
|
||||
intel_encoder->update_pipe = intel_panel_update_backlight;
|
||||
intel_encoder->update_pipe = intel_backlight_update;
|
||||
intel_encoder->shutdown = intel_lvds_shutdown;
|
||||
intel_connector->get_hw_state = intel_connector_get_hw_state;
|
||||
|
||||
@ -999,7 +1002,7 @@ out:
|
||||
mutex_unlock(&dev->mode_config.mutex);
|
||||
|
||||
intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode);
|
||||
intel_panel_setup_backlight(connector, INVALID_PIPE);
|
||||
intel_backlight_setup(intel_connector, INVALID_PIPE);
|
||||
|
||||
lvds_encoder->is_dual_link = compute_is_dual_link_lvds(lvds_encoder);
|
||||
drm_dbg_kms(&dev_priv->drm, "detected %s-link lvds configuration\n",
|
||||
|
@ -30,10 +30,9 @@
|
||||
#include <linux/firmware.h>
|
||||
#include <acpi/video.h>
|
||||
|
||||
#include "display/intel_panel.h"
|
||||
|
||||
#include "i915_drv.h"
|
||||
#include "intel_acpi.h"
|
||||
#include "intel_backlight.h"
|
||||
#include "intel_display_types.h"
|
||||
#include "intel_opregion.h"
|
||||
|
||||
@ -450,7 +449,7 @@ static u32 asle_set_backlight(struct drm_i915_private *dev_priv, u32 bclp)
|
||||
bclp);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
for_each_intel_connector_iter(connector, &conn_iter)
|
||||
intel_panel_set_backlight_acpi(connector->base.state, bclp, 255);
|
||||
intel_backlight_set_acpi(connector->base.state, bclp, 255);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
asle->cblv = DIV_ROUND_UP(bclp * 100, 255) | ASLE_CBLV_VALID;
|
||||
|
||||
|
File diff suppressed because it is too large
@ -8,15 +8,13 @@
|
||||
|
||||
#include <linux/types.h>
|
||||
|
||||
#include "intel_display.h"
|
||||
|
||||
enum drm_connector_status;
|
||||
struct drm_connector;
|
||||
struct drm_connector_state;
|
||||
struct drm_display_mode;
|
||||
struct drm_i915_private;
|
||||
struct intel_connector;
|
||||
struct intel_crtc;
|
||||
struct intel_crtc_state;
|
||||
struct intel_encoder;
|
||||
struct intel_panel;
|
||||
|
||||
int intel_panel_init(struct intel_panel *panel,
|
||||
@ -25,23 +23,16 @@ int intel_panel_init(struct intel_panel *panel,
|
||||
void intel_panel_fini(struct intel_panel *panel);
|
||||
enum drm_connector_status
|
||||
intel_panel_detect(struct drm_connector *connector, bool force);
|
||||
void intel_fixed_panel_mode(const struct drm_display_mode *fixed_mode,
|
||||
bool intel_panel_use_ssc(struct drm_i915_private *i915);
|
||||
void intel_panel_fixed_mode(const struct drm_display_mode *fixed_mode,
|
||||
struct drm_display_mode *adjusted_mode);
|
||||
int intel_pch_panel_fitting(struct intel_crtc_state *crtc_state,
|
||||
const struct drm_connector_state *conn_state);
|
||||
int intel_gmch_panel_fitting(struct intel_crtc_state *crtc_state,
|
||||
const struct drm_connector_state *conn_state);
|
||||
void intel_panel_set_backlight_acpi(const struct drm_connector_state *conn_state,
|
||||
u32 level, u32 max);
|
||||
int intel_panel_setup_backlight(struct drm_connector *connector,
|
||||
enum pipe pipe);
|
||||
void intel_panel_enable_backlight(const struct intel_crtc_state *crtc_state,
|
||||
const struct drm_connector_state *conn_state);
|
||||
void intel_panel_update_backlight(struct intel_atomic_state *state,
|
||||
struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
const struct drm_connector_state *conn_state);
|
||||
void intel_panel_disable_backlight(const struct drm_connector_state *old_conn_state);
|
||||
enum drm_mode_status
|
||||
intel_panel_mode_valid(struct intel_connector *connector,
|
||||
const struct drm_display_mode *mode);
|
||||
int intel_panel_fitting(struct intel_crtc_state *crtc_state,
|
||||
const struct drm_connector_state *conn_state);
|
||||
int intel_panel_compute_config(struct intel_connector *connector,
|
||||
struct drm_display_mode *adjusted_mode);
|
||||
struct drm_display_mode *
|
||||
intel_panel_edid_downclock_mode(struct intel_connector *connector,
|
||||
const struct drm_display_mode *fixed_mode);
|
||||
@ -49,22 +40,5 @@ struct drm_display_mode *
|
||||
intel_panel_edid_fixed_mode(struct intel_connector *connector);
|
||||
struct drm_display_mode *
|
||||
intel_panel_vbt_fixed_mode(struct intel_connector *connector);
|
||||
void intel_panel_set_pwm_level(const struct drm_connector_state *conn_state, u32 level);
|
||||
u32 intel_panel_invert_pwm_level(struct intel_connector *connector, u32 level);
|
||||
u32 intel_panel_backlight_level_to_pwm(struct intel_connector *connector, u32 level);
|
||||
u32 intel_panel_backlight_level_from_pwm(struct intel_connector *connector, u32 val);
|
||||
|
||||
#if IS_ENABLED(CONFIG_BACKLIGHT_CLASS_DEVICE)
|
||||
int intel_backlight_device_register(struct intel_connector *connector);
|
||||
void intel_backlight_device_unregister(struct intel_connector *connector);
|
||||
#else /* CONFIG_BACKLIGHT_CLASS_DEVICE */
|
||||
static inline int intel_backlight_device_register(struct intel_connector *connector)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
static inline void intel_backlight_device_unregister(struct intel_connector *connector)
|
||||
{
|
||||
}
|
||||
#endif /* CONFIG_BACKLIGHT_CLASS_DEVICE */
|
||||
|
||||
#endif /* __INTEL_PANEL_H__ */
|
||||
|
@ -9,6 +9,7 @@
|
||||
#include "intel_display_types.h"
|
||||
#include "intel_dp.h"
|
||||
#include "intel_dpll.h"
|
||||
#include "intel_lvds.h"
|
||||
#include "intel_pps.h"
|
||||
|
||||
static void vlv_steal_power_sequencer(struct drm_i915_private *dev_priv,
|
||||
@ -1408,3 +1409,61 @@ void intel_pps_setup(struct drm_i915_private *i915)
|
||||
else
|
||||
i915->pps_mmio_base = PPS_BASE;
|
||||
}
|
||||
|
||||
void assert_pps_unlocked(struct drm_i915_private *dev_priv, enum pipe pipe)
|
||||
{
|
||||
i915_reg_t pp_reg;
|
||||
u32 val;
|
||||
enum pipe panel_pipe = INVALID_PIPE;
|
||||
bool locked = true;
|
||||
|
||||
if (drm_WARN_ON(&dev_priv->drm, HAS_DDI(dev_priv)))
|
||||
return;
|
||||
|
||||
if (HAS_PCH_SPLIT(dev_priv)) {
|
||||
u32 port_sel;
|
||||
|
||||
pp_reg = PP_CONTROL(0);
|
||||
port_sel = intel_de_read(dev_priv, PP_ON_DELAYS(0)) & PANEL_PORT_SELECT_MASK;
|
||||
|
||||
switch (port_sel) {
|
||||
case PANEL_PORT_SELECT_LVDS:
|
||||
intel_lvds_port_enabled(dev_priv, PCH_LVDS, &panel_pipe);
|
||||
break;
|
||||
case PANEL_PORT_SELECT_DPA:
|
||||
g4x_dp_port_enabled(dev_priv, DP_A, PORT_A, &panel_pipe);
|
||||
break;
|
||||
case PANEL_PORT_SELECT_DPC:
|
||||
g4x_dp_port_enabled(dev_priv, PCH_DP_C, PORT_C, &panel_pipe);
|
||||
break;
|
||||
case PANEL_PORT_SELECT_DPD:
|
||||
g4x_dp_port_enabled(dev_priv, PCH_DP_D, PORT_D, &panel_pipe);
|
||||
break;
|
||||
default:
|
||||
MISSING_CASE(port_sel);
|
||||
break;
|
||||
}
|
||||
} else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
|
||||
/* presumably write lock depends on pipe, not port select */
|
||||
pp_reg = PP_CONTROL(pipe);
|
||||
panel_pipe = pipe;
|
||||
} else {
|
||||
u32 port_sel;
|
||||
|
||||
pp_reg = PP_CONTROL(0);
|
||||
port_sel = intel_de_read(dev_priv, PP_ON_DELAYS(0)) & PANEL_PORT_SELECT_MASK;
|
||||
|
||||
drm_WARN_ON(&dev_priv->drm,
|
||||
port_sel != PANEL_PORT_SELECT_LVDS);
|
||||
intel_lvds_port_enabled(dev_priv, LVDS, &panel_pipe);
|
||||
}
|
||||
|
||||
val = intel_de_read(dev_priv, pp_reg);
|
||||
if (!(val & PANEL_POWER_ON) ||
|
||||
((val & PANEL_UNLOCK_MASK) == PANEL_UNLOCK_REGS))
|
||||
locked = false;
|
||||
|
||||
I915_STATE_WARN(panel_pipe == pipe && locked,
|
||||
"panel assertion failure, pipe %c regs locked\n",
|
||||
pipe_name(pipe));
|
||||
}
|
||||
|
@ -10,6 +10,7 @@
|
||||
|
||||
#include "intel_wakeref.h"
|
||||
|
||||
enum pipe;
|
||||
struct drm_i915_private;
|
||||
struct intel_connector;
|
||||
struct intel_crtc_state;
|
||||
@ -49,4 +50,6 @@ void vlv_pps_init(struct intel_encoder *encoder,
|
||||
void intel_pps_unlock_regs_wa(struct drm_i915_private *i915);
|
||||
void intel_pps_setup(struct drm_i915_private *i915);
|
||||
|
||||
void assert_pps_unlocked(struct drm_i915_private *i915, enum pipe pipe);
|
||||
|
||||
#endif /* __INTEL_PPS_H__ */
|
||||
|
@ -22,6 +22,7 @@
|
||||
*/
|
||||
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_damage_helper.h>
|
||||
|
||||
#include "display/intel_dp.h"
|
||||
|
||||
@ -364,41 +365,6 @@ void intel_psr_init_dpcd(struct intel_dp *intel_dp)
|
||||
}
|
||||
}
|
||||
|
||||
static void hsw_psr_setup_aux(struct intel_dp *intel_dp)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
u32 aux_clock_divider, aux_ctl;
|
||||
int i;
|
||||
static const u8 aux_msg[] = {
|
||||
[0] = DP_AUX_NATIVE_WRITE << 4,
|
||||
[1] = DP_SET_POWER >> 8,
|
||||
[2] = DP_SET_POWER & 0xff,
|
||||
[3] = 1 - 1,
|
||||
[4] = DP_SET_POWER_D0,
|
||||
};
|
||||
u32 psr_aux_mask = EDP_PSR_AUX_CTL_TIME_OUT_MASK |
|
||||
EDP_PSR_AUX_CTL_MESSAGE_SIZE_MASK |
|
||||
EDP_PSR_AUX_CTL_PRECHARGE_2US_MASK |
|
||||
EDP_PSR_AUX_CTL_BIT_CLOCK_2X_MASK;
|
||||
|
||||
BUILD_BUG_ON(sizeof(aux_msg) > 20);
|
||||
for (i = 0; i < sizeof(aux_msg); i += 4)
|
||||
intel_de_write(dev_priv,
|
||||
EDP_PSR_AUX_DATA(intel_dp->psr.transcoder, i >> 2),
|
||||
intel_dp_pack_aux(&aux_msg[i], sizeof(aux_msg) - i));
|
||||
|
||||
aux_clock_divider = intel_dp->get_aux_clock_divider(intel_dp, 0);
|
||||
|
||||
/* Start with bits set for DDI_AUX_CTL register */
|
||||
aux_ctl = intel_dp->get_aux_send_ctl(intel_dp, sizeof(aux_msg),
|
||||
aux_clock_divider);
|
||||
|
||||
/* Select only valid bits for SRD_AUX_CTL */
|
||||
aux_ctl &= psr_aux_mask;
|
||||
intel_de_write(dev_priv, EDP_PSR_AUX_CTL(intel_dp->psr.transcoder),
|
||||
aux_ctl);
|
||||
}
|
||||
|
||||
static void intel_psr_enable_sink(struct intel_dp *intel_dp)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
@ -460,7 +426,7 @@ static u32 intel_psr1_get_tp_time(struct intel_dp *intel_dp)
|
||||
val |= EDP_PSR_TP2_TP3_TIME_2500us;
|
||||
|
||||
check_tp3_sel:
|
||||
if (intel_dp_source_supports_hbr2(intel_dp) &&
|
||||
if (intel_dp_source_supports_tps3(dev_priv) &&
|
||||
drm_dp_tps3_supported(intel_dp->dpcd))
|
||||
val |= EDP_PSR_TP1_TP3_SEL;
|
||||
else
|
||||
@ -545,7 +511,7 @@ static void hsw_activate_psr2(struct intel_dp *intel_dp)
|
||||
if (DISPLAY_VER(dev_priv) >= 10 && DISPLAY_VER(dev_priv) <= 12)
|
||||
val |= EDP_Y_COORDINATE_ENABLE;
|
||||
|
||||
val |= EDP_PSR2_FRAME_BEFORE_SU(intel_dp->psr.sink_sync_latency + 1);
|
||||
val |= EDP_PSR2_FRAME_BEFORE_SU(max_t(u8, intel_dp->psr.sink_sync_latency + 1, 2));
|
||||
val |= intel_psr2_get_tp_time(intel_dp);
|
||||
|
||||
/* Wa_22012278275:adl-p */
|
||||
@ -595,15 +561,16 @@ static void hsw_activate_psr2(struct intel_dp *intel_dp)
|
||||
val |= EDP_PSR2_SU_SDP_SCANLINE;
|
||||
|
||||
if (intel_dp->psr.psr2_sel_fetch_enabled) {
|
||||
u32 tmp;
|
||||
|
||||
/* Wa_1408330847 */
|
||||
if (IS_TGL_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0))
|
||||
intel_de_rmw(dev_priv, CHICKEN_PAR1_1,
|
||||
DIS_RAM_BYPASS_PSR2_MAN_TRACK,
|
||||
DIS_RAM_BYPASS_PSR2_MAN_TRACK);
|
||||
|
||||
intel_de_write(dev_priv,
|
||||
PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder),
|
||||
PSR2_MAN_TRK_CTL_ENABLE);
|
||||
tmp = intel_de_read(dev_priv, PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder));
|
||||
drm_WARN_ON(&dev_priv->drm, !(tmp & PSR2_MAN_TRK_CTL_ENABLE));
|
||||
} else if (HAS_PSR2_SEL_FETCH(dev_priv)) {
|
||||
intel_de_write(dev_priv,
|
||||
PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder), 0);
|
||||
@ -621,9 +588,7 @@ static void hsw_activate_psr2(struct intel_dp *intel_dp)
|
||||
static bool
|
||||
transcoder_has_psr2(struct drm_i915_private *dev_priv, enum transcoder trans)
|
||||
{
|
||||
if (DISPLAY_VER(dev_priv) < 9)
|
||||
return false;
|
||||
else if (DISPLAY_VER(dev_priv) >= 12)
|
||||
if (DISPLAY_VER(dev_priv) >= 12)
|
||||
return trans == TRANSCODER_A;
|
||||
else
|
||||
return trans == TRANSCODER_EDP;
|
||||
@ -755,11 +720,7 @@ tgl_dc3co_exitline_compute_config(struct intel_dp *intel_dp,
|
||||
static bool intel_psr2_sel_fetch_config_valid(struct intel_dp *intel_dp,
|
||||
struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_atomic_state *state = to_intel_atomic_state(crtc_state->uapi.state);
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
struct intel_plane_state *plane_state;
|
||||
struct intel_plane *plane;
|
||||
int i;
|
||||
|
||||
if (!dev_priv->params.enable_psr2_sel_fetch &&
|
||||
intel_dp->psr.debug != I915_PSR_DEBUG_ENABLE_SEL_FETCH) {
|
||||
@ -774,14 +735,6 @@ static bool intel_psr2_sel_fetch_config_valid(struct intel_dp *intel_dp,
|
||||
return false;
|
||||
}
|
||||
|
||||
for_each_new_intel_plane_in_state(state, plane, plane_state, i) {
|
||||
if (plane_state->uapi.rotation != DRM_MODE_ROTATE_0) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
"PSR2 sel fetch not enabled, plane rotated\n");
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
/* Wa_14010254185 Wa_14010103792 */
|
||||
if (IS_TGL_DISPLAY_STEP(dev_priv, STEP_A0, STEP_C0)) {
|
||||
drm_dbg_kms(&dev_priv->drm,
|
||||
@ -877,12 +830,8 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
|
||||
return false;
|
||||
}
|
||||
|
||||
/*
|
||||
* We are missing the implementation of some workarounds to enabled PSR2
|
||||
* in Alderlake_P, until ready PSR2 should be kept disabled.
|
||||
*/
|
||||
if (IS_ALDERLAKE_P(dev_priv)) {
|
||||
drm_dbg_kms(&dev_priv->drm, "PSR2 is missing the implementation of workarounds\n");
|
||||
if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0)) {
|
||||
drm_dbg_kms(&dev_priv->drm, "PSR2 not completely functional in this stepping\n");
|
||||
return false;
|
||||
}
|
||||
|
||||
@ -985,7 +934,8 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
|
||||
}
|
||||
|
||||
void intel_psr_compute_config(struct intel_dp *intel_dp,
|
||||
struct intel_crtc_state *crtc_state)
|
||||
struct intel_crtc_state *crtc_state,
|
||||
struct drm_connector_state *conn_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
const struct drm_display_mode *adjusted_mode =
|
||||
@ -1037,7 +987,10 @@ void intel_psr_compute_config(struct intel_dp *intel_dp,
|
||||
|
||||
crtc_state->has_psr = true;
|
||||
crtc_state->has_psr2 = intel_psr2_config_valid(intel_dp, crtc_state);
|
||||
|
||||
crtc_state->infoframes.enable |= intel_hdmi_infoframe_enable(DP_SDP_VSC);
|
||||
intel_dp_compute_psr_vsc_sdp(intel_dp, crtc_state, conn_state,
|
||||
&crtc_state->psr_vsc);
|
||||
}
|
||||
|
||||
void intel_psr_get_config(struct intel_encoder *encoder,
|
||||
@ -1114,12 +1067,6 @@ static void intel_psr_enable_source(struct intel_dp *intel_dp)
|
||||
enum transcoder cpu_transcoder = intel_dp->psr.transcoder;
|
||||
u32 mask;
|
||||
|
||||
/* Only HSW and BDW have PSR AUX registers that need to be setup. SKL+
|
||||
* use hardcoded values PSR AUX transactions
|
||||
*/
|
||||
if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
|
||||
hsw_psr_setup_aux(intel_dp);
|
||||
|
||||
if (intel_dp->psr.psr2_enabled && DISPLAY_VER(dev_priv) == 9) {
|
||||
i915_reg_t reg = CHICKEN_TRANS(cpu_transcoder);
|
||||
u32 chicken = intel_de_read(dev_priv, reg);
|
||||
@ -1129,6 +1076,16 @@ static void intel_psr_enable_source(struct intel_dp *intel_dp)
|
||||
intel_de_write(dev_priv, reg, chicken);
|
||||
}
|
||||
|
||||
/*
|
||||
* Wa_16014451276:adlp
|
||||
* All supported adlp panels have 1-based X granularity, this may
|
||||
* cause issues if non-supported panels are used.
|
||||
*/
|
||||
if (IS_ALDERLAKE_P(dev_priv) &&
|
||||
intel_dp->psr.psr2_enabled)
|
||||
intel_de_rmw(dev_priv, CHICKEN_TRANS(cpu_transcoder), 0,
|
||||
ADLP_1_BASED_X_GRANULARITY);
|
||||
|
||||
/*
|
||||
* Per Spec: Avoid continuous PSR exit by masking MEMUP and HPD also
|
||||
* mask LPSP to avoid dependency on other drivers that might block
|
||||
@ -1174,6 +1131,11 @@ static void intel_psr_enable_source(struct intel_dp *intel_dp)
|
||||
TRANS_SET_CONTEXT_LATENCY(intel_dp->psr.transcoder),
|
||||
TRANS_SET_CONTEXT_LATENCY_MASK,
|
||||
TRANS_SET_CONTEXT_LATENCY_VALUE(1));
|
||||
|
||||
/* Wa_16012604467:adlp */
|
||||
if (IS_ALDERLAKE_P(dev_priv) && intel_dp->psr.psr2_enabled)
|
||||
intel_de_rmw(dev_priv, CLKGATE_DIS_MISC, 0,
|
||||
CLKGATE_DIS_MISC_DMASC_GATING_DIS);
|
||||
}
|
||||
|
||||
static bool psr_interrupt_error_check(struct intel_dp *intel_dp)
|
||||
@ -1208,8 +1170,7 @@ static bool psr_interrupt_error_check(struct intel_dp *intel_dp)
|
||||
}
|
||||
|
||||
static void intel_psr_enable_locked(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
const struct drm_connector_state *conn_state)
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
@ -1236,9 +1197,7 @@ static void intel_psr_enable_locked(struct intel_dp *intel_dp,
|
||||
|
||||
drm_dbg_kms(&dev_priv->drm, "Enabling PSR%s\n",
|
||||
intel_dp->psr.psr2_enabled ? "2" : "1");
|
||||
intel_dp_compute_psr_vsc_sdp(intel_dp, crtc_state, conn_state,
|
||||
&intel_dp->psr.vsc);
|
||||
intel_write_dp_vsc_sdp(encoder, crtc_state, &intel_dp->psr.vsc);
|
||||
intel_write_dp_vsc_sdp(encoder, crtc_state, &crtc_state->psr_vsc);
|
||||
intel_snps_phy_update_psr_power_state(dev_priv, phy, true);
|
||||
intel_psr_enable_sink(intel_dp);
|
||||
intel_psr_enable_source(intel_dp);
|
||||
@ -1248,33 +1207,6 @@ static void intel_psr_enable_locked(struct intel_dp *intel_dp,
|
||||
intel_psr_activate(intel_dp);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_psr_enable - Enable PSR
|
||||
* @intel_dp: Intel DP
|
||||
* @crtc_state: new CRTC state
|
||||
* @conn_state: new CONNECTOR state
|
||||
*
|
||||
* This function can only be called after the pipe is fully trained and enabled.
|
||||
*/
|
||||
void intel_psr_enable(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
const struct drm_connector_state *conn_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
|
||||
if (!CAN_PSR(intel_dp))
|
||||
return;
|
||||
|
||||
if (!crtc_state->has_psr)
|
||||
return;
|
||||
|
||||
drm_WARN_ON(&dev_priv->drm, dev_priv->drrs.dp);
|
||||
|
||||
mutex_lock(&intel_dp->psr.lock);
|
||||
intel_psr_enable_locked(intel_dp, crtc_state, conn_state);
|
||||
mutex_unlock(&intel_dp->psr.lock);
|
||||
}
|
||||
|
||||
static void intel_psr_exit(struct intel_dp *intel_dp)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
@ -1363,6 +1295,11 @@ static void intel_psr_disable_locked(struct intel_dp *intel_dp)
|
||||
TRANS_SET_CONTEXT_LATENCY(intel_dp->psr.transcoder),
|
||||
TRANS_SET_CONTEXT_LATENCY_MASK, 0);
|
||||
|
||||
/* Wa_16012604467:adlp */
|
||||
if (IS_ALDERLAKE_P(dev_priv) && intel_dp->psr.psr2_enabled)
|
||||
intel_de_rmw(dev_priv, CLKGATE_DIS_MISC,
|
||||
CLKGATE_DIS_MISC_DMASC_GATING_DIS, 0);
|
||||
|
||||
intel_snps_phy_update_psr_power_state(dev_priv, phy, false);
|
||||
|
||||
/* Disable PSR on Sink */
|
||||
@ -1456,27 +1393,48 @@ unlock:
|
||||
mutex_unlock(&psr->lock);
|
||||
}
|
||||
|
||||
static inline u32 man_trk_ctl_single_full_frame_bit_get(struct drm_i915_private *dev_priv)
{
return IS_ALDERLAKE_P(dev_priv) ?
ADLP_PSR2_MAN_TRK_CTL_SF_SINGLE_FULL_FRAME :
PSR2_MAN_TRK_CTL_SF_SINGLE_FULL_FRAME;
}

static void psr_force_hw_tracking_exit(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);

if (DISPLAY_VER(dev_priv) >= 9)
/*
* Display WA #0884: skl+
* This documented WA for bxt can be safely applied
* broadly so we can force HW tracking to exit PSR
* instead of disabling and re-enabling.
* The workaround tells us to write 0 to CUR_SURFLIVE_A,
* but it makes more sense to write to the current active
* pipe.
*/
intel_de_write(dev_priv, CURSURFLIVE(intel_dp->psr.pipe), 0);
else
/*
* A write to CURSURFLIVE does not cause HW tracking to exit
* PSR on older gens, so do the manual exit instead.
*/
intel_psr_exit(intel_dp);
if (intel_dp->psr.psr2_sel_fetch_enabled)
intel_de_rmw(dev_priv,
PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder), 0,
man_trk_ctl_single_full_frame_bit_get(dev_priv));

/*
* Display WA #0884: skl+
* This documented WA for bxt can be safely applied
* broadly so we can force HW tracking to exit PSR
* instead of disabling and re-enabling.
* The workaround tells us to write 0 to CUR_SURFLIVE_A,
* but it makes more sense to write to the current active
* pipe.
*
* This workaround is not documented for platforms with display 10 or
* newer, but testing proved that it works up to display 13; newer
* platforms will need testing.
*/
intel_de_write(dev_priv, CURSURFLIVE(intel_dp->psr.pipe), 0);
}
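psr_force_hw_tracking_exit() above relies on a read-modify-write helper (intel_de_rmw) to set the single-full-frame bit without disturbing the rest of the manual-tracking register. The generic userspace sketch below shows only that clear-then-set pattern; the register storage, helper names and bit positions are hypothetical.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical 32-bit register backed by plain memory. */
static uint32_t fake_reg;

static uint32_t reg_read(void)		{ return fake_reg; }
static void     reg_write(uint32_t val)	{ fake_reg = val; }

/* Read-modify-write: clear the bits in 'clear', then set the bits in 'set'. */
static uint32_t reg_rmw(uint32_t clear, uint32_t set)
{
	uint32_t old = reg_read();

	reg_write((old & ~clear) | set);
	return old;
}

#define MAN_TRK_ENABLE		(1u << 31)	/* illustrative bit positions */
#define MAN_TRK_SF_FULL_FRAME	(1u << 28)

int main(void)
{
	reg_write(MAN_TRK_ENABLE);		/* tracking already enabled */
	reg_rmw(0, MAN_TRK_SF_FULL_FRAME);	/* request one full-frame fetch */
	printf("reg=%#x\n", (unsigned)reg_read());
	return 0;
}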
void intel_psr2_disable_plane_sel_fetch(struct intel_plane *plane,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
|
||||
enum pipe pipe = plane->pipe;
|
||||
|
||||
if (!crtc_state->enable_psr2_sel_fetch)
|
||||
return;
|
||||
|
||||
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_CTL(pipe, plane->id), 0);
|
||||
}
|
||||
|
||||
void intel_psr2_program_plane_sel_fetch(struct intel_plane *plane,
|
||||
@ -1487,17 +1445,17 @@ void intel_psr2_program_plane_sel_fetch(struct intel_plane *plane,
|
||||
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
|
||||
enum pipe pipe = plane->pipe;
|
||||
const struct drm_rect *clip;
|
||||
u32 val, offset;
|
||||
int ret, x, y;
|
||||
u32 val;
|
||||
int x, y;
|
||||
|
||||
if (!crtc_state->enable_psr2_sel_fetch)
|
||||
return;
|
||||
|
||||
val = plane_state ? plane_state->ctl : 0;
|
||||
val &= plane->id == PLANE_CURSOR ? val : PLANE_SEL_FETCH_CTL_ENABLE;
|
||||
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_CTL(pipe, plane->id), val);
|
||||
if (!val || plane->id == PLANE_CURSOR)
|
||||
if (plane->id == PLANE_CURSOR) {
|
||||
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_CTL(pipe, plane->id),
|
||||
plane_state->ctl);
|
||||
return;
|
||||
}
|
||||
|
||||
clip = &plane_state->psr2_sel_fetch_area;
|
||||
|
||||
@ -1508,10 +1466,6 @@ void intel_psr2_program_plane_sel_fetch(struct intel_plane *plane,
|
||||
/* TODO: consider auxiliary surfaces */
|
||||
x = plane_state->uapi.src.x1 >> 16;
|
||||
y = (plane_state->uapi.src.y1 >> 16) + clip->y1;
|
||||
ret = skl_calc_main_surface_offset(plane_state, &x, &y, &offset);
|
||||
if (ret)
|
||||
drm_warn_once(&dev_priv->drm, "skl_calc_main_surface_offset() returned %i\n",
|
||||
ret);
|
||||
val = y << 16 | x;
|
||||
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_OFFSET(pipe, plane->id),
|
||||
val);
|
||||
@ -1520,14 +1474,16 @@ void intel_psr2_program_plane_sel_fetch(struct intel_plane *plane,
|
||||
val = (drm_rect_height(clip) - 1) << 16;
|
||||
val |= (drm_rect_width(&plane_state->uapi.src) >> 16) - 1;
|
||||
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_SIZE(pipe, plane->id), val);
|
||||
|
||||
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_CTL(pipe, plane->id),
|
||||
PLANE_SEL_FETCH_CTL_ENABLE);
|
||||
}
|
||||
|
||||
void intel_psr2_program_trans_man_trk_ctl(const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
|
||||
|
||||
if (!HAS_PSR2_SEL_FETCH(dev_priv) ||
|
||||
!crtc_state->enable_psr2_sel_fetch)
|
||||
if (!crtc_state->enable_psr2_sel_fetch)
|
||||
return;
|
||||
|
||||
intel_de_write(dev_priv, PSR2_MAN_TRK_CTL(crtc_state->cpu_transcoder),
|
||||
@ -1542,11 +1498,11 @@ static void psr2_man_trk_ctl_calc(struct intel_crtc_state *crtc_state,
|
||||
u32 val = PSR2_MAN_TRK_CTL_ENABLE;
|
||||
|
||||
if (full_update) {
|
||||
if (IS_ALDERLAKE_P(dev_priv))
|
||||
val |= ADLP_PSR2_MAN_TRK_CTL_SF_SINGLE_FULL_FRAME;
|
||||
else
|
||||
val |= PSR2_MAN_TRK_CTL_SF_SINGLE_FULL_FRAME;
|
||||
|
||||
/*
|
||||
* Not applying Wa_14014971508:adlp as we do not support the
|
||||
* feature that requires this workaround.
|
||||
*/
|
||||
val |= man_trk_ctl_single_full_frame_bit_get(dev_priv);
|
||||
goto exit;
|
||||
}
|
||||
|
||||
@ -1555,7 +1511,7 @@ static void psr2_man_trk_ctl_calc(struct intel_crtc_state *crtc_state,
|
||||
|
||||
if (IS_ALDERLAKE_P(dev_priv)) {
|
||||
val |= ADLP_PSR2_MAN_TRK_CTL_SU_REGION_START_ADDR(clip->y1);
|
||||
val |= ADLP_PSR2_MAN_TRK_CTL_SU_REGION_END_ADDR(clip->y2);
|
||||
val |= ADLP_PSR2_MAN_TRK_CTL_SU_REGION_END_ADDR(clip->y2 - 1);
|
||||
} else {
|
||||
drm_WARN_ON(crtc_state->uapi.crtc->dev, clip->y1 % 4 || clip->y2 % 4);
|
||||
|
||||
@ -1597,6 +1553,45 @@ static void intel_psr2_sel_fetch_pipe_alignment(const struct intel_crtc_state *c
|
||||
drm_warn(&dev_priv->drm, "Missing PSR2 sel fetch alignment with DSC\n");
|
||||
}
|
||||
|
||||
/*
* TODO: It is not clear how to handle planes with a negative position;
* planes are also not updated if they have a negative X position, so
* for now do a full update in these cases.
*
* TODO: Multi-planar format handling is missing; until it is
* implemented, full frame updates are sent.
*
* Plane scaling and rotation are not supported by selective fetch and both
* properties can change without a modeset, so they need to be checked at every
* atomic commit.
*/
static bool psr2_sel_fetch_plane_state_supported(const struct intel_plane_state *plane_state)
{
if (plane_state->uapi.dst.y1 < 0 ||
plane_state->uapi.dst.x1 < 0 ||
plane_state->scaler_id >= 0 ||
plane_state->hw.fb->format->num_planes > 1 ||
plane_state->uapi.rotation != DRM_MODE_ROTATE_0)
return false;

return true;
}

/*
* Check for pipe properties that are not supported by selective fetch.
*
* TODO: pipe scaling causes a modeset but skl_update_scaler_crtc() is executed
* after intel_psr_compute_config(), so for now keep PSR2 selective fetch
* enabled and go to the full update path.
*/
static bool psr2_sel_fetch_pipe_state_supported(const struct intel_crtc_state *crtc_state)
{
if (crtc_state->scaler_state.scaler_id >= 0)
return false;

return true;
}
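The intel_psr2_sel_fetch_update() function below accumulates per-plane damage rectangles into one pipe-wide clip, falling back to a full-frame update when a plane or the pipe fails the checks above. The standalone C sketch that follows shows only that rectangle-union step; the struct and helper names are invented for the example.

#include <limits.h>
#include <stdio.h>

struct rect {
	int y1, y2;	/* selective fetch only cares about scanline extents */
};

/* Grow 'area' so it also covers 'damage' (a y-only clip union). */
static void clip_area_update(struct rect *area, const struct rect *damage)
{
	if (damage->y1 < area->y1)
		area->y1 = damage->y1;
	if (damage->y2 > area->y2)
		area->y2 = damage->y2;
}

int main(void)
{
	struct rect pipe_clip = { .y1 = INT_MAX, .y2 = INT_MIN };
	const struct rect damage[] = { { 100, 164 }, { 48, 80 }, { 600, 720 } };
	size_t i;

	for (i = 0; i < sizeof(damage) / sizeof(damage[0]); i++)
		clip_area_update(&pipe_clip, &damage[i]);

	/* The union is what would be programmed as the manual tracking region. */
	printf("fetch scanlines %d..%d\n", pipe_clip.y1, pipe_clip.y2);
	return 0;
}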
int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
|
||||
struct intel_crtc *crtc)
|
||||
{
|
||||
@ -1610,9 +1605,10 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
|
||||
if (!crtc_state->enable_psr2_sel_fetch)
|
||||
return 0;
|
||||
|
||||
ret = drm_atomic_add_affected_planes(&state->base, &crtc->base);
|
||||
if (ret)
|
||||
return ret;
|
||||
if (!psr2_sel_fetch_pipe_state_supported(crtc_state)) {
|
||||
full_update = true;
|
||||
goto skip_sel_fetch_set_loop;
|
||||
}
|
||||
|
||||
/*
|
||||
* Calculate minimal selective fetch area of each plane and calculate
|
||||
@ -1623,8 +1619,8 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
|
||||
for_each_oldnew_intel_plane_in_state(state, plane, old_plane_state,
|
||||
new_plane_state, i) {
|
||||
struct drm_rect src, damaged_area = { .y1 = -1 };
|
||||
struct drm_mode_rect *damaged_clips;
|
||||
u32 num_clips, j;
|
||||
struct drm_atomic_helper_damage_iter iter;
|
||||
struct drm_rect clip;
|
||||
|
||||
if (new_plane_state->uapi.crtc != crtc_state->uapi.crtc)
|
||||
continue;
|
||||
@ -1633,19 +1629,11 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
|
||||
!old_plane_state->uapi.visible)
|
||||
continue;
|
||||
|
||||
/*
|
||||
* TODO: Not clear how to handle planes with negative position,
|
||||
* also planes are not updated if they have a negative X
|
||||
* position so for now doing a full update in this cases
|
||||
*/
|
||||
if (new_plane_state->uapi.dst.y1 < 0 ||
|
||||
new_plane_state->uapi.dst.x1 < 0) {
|
||||
if (!psr2_sel_fetch_plane_state_supported(new_plane_state)) {
|
||||
full_update = true;
|
||||
break;
|
||||
}
|
||||
|
||||
num_clips = drm_plane_get_damage_clips_count(&new_plane_state->uapi);
|
||||
|
||||
/*
|
||||
* If visibility or plane moved, mark the whole plane area as
|
||||
* damaged as it needs to be complete redraw in the new and old
|
||||
@ -1666,14 +1654,8 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
|
||||
clip_area_update(&pipe_clip, &damaged_area);
|
||||
}
|
||||
continue;
|
||||
} else if (new_plane_state->uapi.alpha != old_plane_state->uapi.alpha ||
|
||||
(!num_clips &&
|
||||
new_plane_state->uapi.fb != old_plane_state->uapi.fb)) {
|
||||
/*
* If the plane doesn't have damaged areas but the
* framebuffer or alpha changed, mark the whole
* plane area as damaged.
*/
|
||||
} else if (new_plane_state->uapi.alpha != old_plane_state->uapi.alpha) {
|
||||
/* If alpha changed mark the whole plane area as damaged */
|
||||
damaged_area.y1 = new_plane_state->uapi.dst.y1;
|
||||
damaged_area.y2 = new_plane_state->uapi.dst.y2;
|
||||
clip_area_update(&pipe_clip, &damaged_area);
|
||||
@ -1681,15 +1663,11 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
|
||||
}
|
||||
|
||||
drm_rect_fp_to_int(&src, &new_plane_state->uapi.src);
|
||||
damaged_clips = drm_plane_get_damage_clips(&new_plane_state->uapi);
|
||||
|
||||
for (j = 0; j < num_clips; j++) {
|
||||
struct drm_rect clip;
|
||||
|
||||
clip.x1 = damaged_clips[j].x1;
|
||||
clip.y1 = damaged_clips[j].y1;
|
||||
clip.x2 = damaged_clips[j].x2;
|
||||
clip.y2 = damaged_clips[j].y2;
|
||||
drm_atomic_helper_damage_iter_init(&iter,
|
||||
&old_plane_state->uapi,
|
||||
&new_plane_state->uapi);
|
||||
drm_atomic_for_each_plane_damage(&iter, &clip) {
|
||||
if (drm_rect_intersect(&clip, &src))
|
||||
clip_area_update(&damaged_area, &clip);
|
||||
}
|
||||
@ -1705,6 +1683,10 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
|
||||
if (full_update)
|
||||
goto skip_sel_fetch_set_loop;
|
||||
|
||||
ret = drm_atomic_add_affected_planes(&state->base, &crtc->base);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
intel_psr2_sel_fetch_pipe_alignment(crtc_state, &pipe_clip);
|
||||
|
||||
/*
|
||||
@ -1723,6 +1705,11 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
|
||||
if (!drm_rect_intersect(&inter, &new_plane_state->uapi.dst))
|
||||
continue;
|
||||
|
||||
if (!psr2_sel_fetch_plane_state_supported(new_plane_state)) {
|
||||
full_update = true;
|
||||
break;
|
||||
}
|
||||
|
||||
sel_fetch_area = &new_plane_state->psr2_sel_fetch_area;
|
||||
sel_fetch_area->y1 = inter.y1 - new_plane_state->uapi.dst.y1;
|
||||
sel_fetch_area->y2 = inter.y2 - new_plane_state->uapi.dst.y1;
|
||||
@ -1734,58 +1721,92 @@ skip_sel_fetch_set_loop:
|
||||
return 0;
|
||||
}
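/*
* Worked example (illustrative only, values are made up): if the damaged
* pipe area ends up spanning y1 = 100 .. y2 = 200 and a plane's uapi.dst
* spans y = 50 .. 300, the intersection is 100 .. 200; translated into
* plane-relative coordinates, that plane's psr2_sel_fetch_area becomes
* y1 = inter.y1 - dst.y1 = 50 and y2 = inter.y2 - dst.y1 = 150.
*/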
|
||||
|
||||
/**
* intel_psr_update - Update PSR state
* @intel_dp: Intel DP
* @crtc_state: new CRTC state
* @conn_state: new CONNECTOR state
*
* This function updates the PSR state, disabling, enabling or switching the
* PSR version when executing fastsets. For a full modeset, intel_psr_disable()
* and intel_psr_enable() should be called instead.
*/
|
||||
void intel_psr_update(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
const struct drm_connector_state *conn_state)
|
||||
static void _intel_psr_pre_plane_update(const struct intel_atomic_state *state,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
|
||||
struct intel_psr *psr = &intel_dp->psr;
|
||||
bool enable, psr2_enable;
|
||||
struct intel_encoder *encoder;
|
||||
|
||||
if (!CAN_PSR(intel_dp))
|
||||
for_each_intel_encoder_mask_with_psr(state->base.dev, encoder,
|
||||
crtc_state->uapi.encoder_mask) {
|
||||
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
|
||||
struct intel_psr *psr = &intel_dp->psr;
|
||||
bool needs_to_disable = false;
|
||||
|
||||
mutex_lock(&psr->lock);
|
||||
|
||||
/*
|
||||
* Reasons to disable:
|
||||
* - PSR disabled in new state
|
||||
* - All planes will go inactive
|
||||
* - Changing between PSR versions
|
||||
*/
|
||||
needs_to_disable |= !crtc_state->has_psr;
|
||||
needs_to_disable |= !crtc_state->active_planes;
|
||||
needs_to_disable |= crtc_state->has_psr2 != psr->psr2_enabled;
|
||||
|
||||
if (psr->enabled && needs_to_disable)
|
||||
intel_psr_disable_locked(intel_dp);
|
||||
|
||||
mutex_unlock(&psr->lock);
|
||||
}
|
||||
}
|
||||
|
||||
void intel_psr_pre_plane_update(const struct intel_atomic_state *state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
|
||||
struct intel_crtc_state *crtc_state;
|
||||
struct intel_crtc *crtc;
|
||||
int i;
|
||||
|
||||
if (!HAS_PSR(dev_priv))
|
||||
return;
|
||||
|
||||
mutex_lock(&intel_dp->psr.lock);
|
||||
for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i)
|
||||
_intel_psr_pre_plane_update(state, crtc_state);
|
||||
}
|
||||
|
||||
enable = crtc_state->has_psr;
|
||||
psr2_enable = crtc_state->has_psr2;
|
||||
static void _intel_psr_post_plane_update(const struct intel_atomic_state *state,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
|
||||
struct intel_encoder *encoder;
|
||||
|
||||
if (!crtc_state->has_psr)
|
||||
return;
|
||||
|
||||
for_each_intel_encoder_mask_with_psr(state->base.dev, encoder,
|
||||
crtc_state->uapi.encoder_mask) {
|
||||
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
|
||||
struct intel_psr *psr = &intel_dp->psr;
|
||||
|
||||
mutex_lock(&psr->lock);
|
||||
|
||||
drm_WARN_ON(&dev_priv->drm, psr->enabled && !crtc_state->active_planes);
|
||||
|
||||
/* Only enable if there are active planes */
|
||||
if (!psr->enabled && crtc_state->active_planes)
|
||||
intel_psr_enable_locked(intel_dp, crtc_state);
|
||||
|
||||
if (enable == psr->enabled && psr2_enable == psr->psr2_enabled &&
|
||||
crtc_state->enable_psr2_sel_fetch == psr->psr2_sel_fetch_enabled) {
|
||||
/* Force a PSR exit when enabling CRC to avoid CRC timeouts */
|
||||
if (crtc_state->crc_enabled && psr->enabled)
|
||||
psr_force_hw_tracking_exit(intel_dp);
|
||||
else if (DISPLAY_VER(dev_priv) < 9 && psr->enabled) {
|
||||
/*
|
||||
* Activate PSR again after a force exit when enabling
|
||||
* CRC in older gens
|
||||
*/
|
||||
if (!intel_dp->psr.active &&
|
||||
!intel_dp->psr.busy_frontbuffer_bits)
|
||||
schedule_work(&intel_dp->psr.work);
|
||||
}
|
||||
|
||||
goto unlock;
|
||||
mutex_unlock(&psr->lock);
|
||||
}
|
||||
}
|
||||
|
||||
if (psr->enabled)
|
||||
intel_psr_disable_locked(intel_dp);
|
||||
void intel_psr_post_plane_update(const struct intel_atomic_state *state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
|
||||
struct intel_crtc_state *crtc_state;
|
||||
struct intel_crtc *crtc;
|
||||
int i;
|
||||
|
||||
if (enable)
|
||||
intel_psr_enable_locked(intel_dp, crtc_state, conn_state);
|
||||
if (!HAS_PSR(dev_priv))
|
||||
return;
|
||||
|
||||
unlock:
|
||||
mutex_unlock(&intel_dp->psr.lock);
|
||||
for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i)
|
||||
_intel_psr_post_plane_update(state, crtc_state);
|
||||
}
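/*
* Illustrative sketch (not part of the change itself): with the enable and
* disable logic split out of intel_psr_update(), the atomic commit path is
* expected to bracket plane programming roughly as below. The function name
* used here is an assumption for illustration only.
*/
#if 0
static void example_commit_crtcs(struct intel_atomic_state *state)
{
	/* Disable PSR, or prepare a PSR version switch, before plane updates. */
	intel_psr_pre_plane_update(state);

	/* ... program planes / pipes for the new state ... */

	/* (Re)enable PSR once the new plane state has been committed. */
	intel_psr_post_plane_update(state);
}
#endif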
|
||||
|
||||
/**
|
||||
@ -2065,20 +2086,16 @@ void intel_psr_invalidate(struct drm_i915_private *dev_priv,
|
||||
/*
* When we completely rely on PSR2 S/W tracking in the future,
* intel_psr_flush() will invalidate and flush the PSR for the ORIGIN_FLIP
* event as well, therefore tgl_dc3co_flush() will need to be changed
* event as well, therefore tgl_dc3co_flush_locked() will need to be changed
* accordingly in the future.
*/
|
||||
static void
|
||||
tgl_dc3co_flush(struct intel_dp *intel_dp, unsigned int frontbuffer_bits,
|
||||
enum fb_op_origin origin)
|
||||
tgl_dc3co_flush_locked(struct intel_dp *intel_dp, unsigned int frontbuffer_bits,
|
||||
enum fb_op_origin origin)
|
||||
{
|
||||
mutex_lock(&intel_dp->psr.lock);
|
||||
|
||||
if (!intel_dp->psr.dc3co_exitline)
|
||||
goto unlock;
|
||||
|
||||
if (!intel_dp->psr.psr2_enabled || !intel_dp->psr.active)
|
||||
goto unlock;
|
||||
if (!intel_dp->psr.dc3co_exitline || !intel_dp->psr.psr2_enabled ||
|
||||
!intel_dp->psr.active)
|
||||
return;
|
||||
|
||||
/*
|
||||
* At every frontbuffer flush flip event, modify the delay of the delayed work,
|
||||
@ -2086,14 +2103,11 @@ tgl_dc3co_flush(struct intel_dp *intel_dp, unsigned int frontbuffer_bits,
|
||||
*/
|
||||
if (!(frontbuffer_bits &
|
||||
INTEL_FRONTBUFFER_ALL_MASK(intel_dp->psr.pipe)))
|
||||
goto unlock;
|
||||
return;
|
||||
|
||||
tgl_psr2_enable_dc3co(intel_dp);
|
||||
mod_delayed_work(system_wq, &intel_dp->psr.dc3co_work,
|
||||
intel_dp->psr.dc3co_exit_delay);
|
||||
|
||||
unlock:
|
||||
mutex_unlock(&intel_dp->psr.lock);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -2118,11 +2132,6 @@ void intel_psr_flush(struct drm_i915_private *dev_priv,
|
||||
unsigned int pipe_frontbuffer_bits = frontbuffer_bits;
|
||||
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
|
||||
|
||||
if (origin == ORIGIN_FLIP) {
|
||||
tgl_dc3co_flush(intel_dp, frontbuffer_bits, origin);
|
||||
continue;
|
||||
}
|
||||
|
||||
mutex_lock(&intel_dp->psr.lock);
|
||||
if (!intel_dp->psr.enabled) {
|
||||
mutex_unlock(&intel_dp->psr.lock);
|
||||
@ -2143,6 +2152,14 @@ void intel_psr_flush(struct drm_i915_private *dev_priv,
|
||||
continue;
|
||||
}
|
||||
|
||||
if (origin == ORIGIN_FLIP ||
|
||||
(origin == ORIGIN_CURSOR_UPDATE &&
|
||||
!intel_dp->psr.psr2_sel_fetch_enabled)) {
|
||||
tgl_dc3co_flush_locked(intel_dp, frontbuffer_bits, origin);
|
||||
mutex_unlock(&intel_dp->psr.lock);
|
||||
continue;
|
||||
}
|
||||
|
||||
/* By definition flush = invalidate + flush */
|
||||
if (pipe_frontbuffer_bits)
|
||||
psr_force_hw_tracking_exit(intel_dp);
|
||||
@ -2186,23 +2203,12 @@ void intel_psr_init(struct intel_dp *intel_dp)
|
||||
|
||||
intel_dp->psr.source_support = true;
|
||||
|
||||
if (IS_HASWELL(dev_priv))
|
||||
/*
* HSW doesn't have PSR registers in the same space as the transcoder,
* so set this to a value that, when subtracted from the register
* in transcoder space, results in the right offset for HSW
*/
|
||||
dev_priv->hsw_psr_mmio_adjust = _SRD_CTL_EDP - _HSW_EDP_PSR_BASE;
|
||||
|
||||
if (dev_priv->params.enable_psr == -1)
|
||||
if (DISPLAY_VER(dev_priv) < 9 || !dev_priv->vbt.psr.enable)
|
||||
if (!dev_priv->vbt.psr.enable)
|
||||
dev_priv->params.enable_psr = 0;
|
||||
|
||||
/* Set link_standby x link_off defaults */
|
||||
if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
|
||||
/* HSW and BDW require workarounds that we don't implement. */
|
||||
intel_dp->psr.link_standby = false;
|
||||
else if (DISPLAY_VER(dev_priv) < 12)
|
||||
if (DISPLAY_VER(dev_priv) < 12)
|
||||
/* For new platforms up to TGL let's respect VBT back again */
|
||||
intel_dp->psr.link_standby = dev_priv->vbt.psr.full_link;
|
||||
|
||||
|
@ -20,14 +20,10 @@ struct intel_plane;
|
||||
struct intel_encoder;
|
||||
|
||||
void intel_psr_init_dpcd(struct intel_dp *intel_dp);
|
||||
void intel_psr_enable(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
const struct drm_connector_state *conn_state);
|
||||
void intel_psr_pre_plane_update(const struct intel_atomic_state *state);
|
||||
void intel_psr_post_plane_update(const struct intel_atomic_state *state);
|
||||
void intel_psr_disable(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *old_crtc_state);
|
||||
void intel_psr_update(struct intel_dp *intel_dp,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
const struct drm_connector_state *conn_state);
|
||||
int intel_psr_debug_set(struct intel_dp *intel_dp, u64 value);
|
||||
void intel_psr_invalidate(struct drm_i915_private *dev_priv,
|
||||
unsigned frontbuffer_bits,
|
||||
@ -37,7 +33,8 @@ void intel_psr_flush(struct drm_i915_private *dev_priv,
|
||||
enum fb_op_origin origin);
|
||||
void intel_psr_init(struct intel_dp *intel_dp);
|
||||
void intel_psr_compute_config(struct intel_dp *intel_dp,
|
||||
struct intel_crtc_state *crtc_state);
|
||||
struct intel_crtc_state *crtc_state,
|
||||
struct drm_connector_state *conn_state);
|
||||
void intel_psr_get_config(struct intel_encoder *encoder,
|
||||
struct intel_crtc_state *pipe_config);
|
||||
void intel_psr_irq_handler(struct intel_dp *intel_dp, u32 psr_iir);
|
||||
@ -51,6 +48,8 @@ void intel_psr2_program_plane_sel_fetch(struct intel_plane *plane,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
const struct intel_plane_state *plane_state,
|
||||
int color_plane);
|
||||
void intel_psr2_disable_plane_sel_fetch(struct intel_plane *plane,
|
||||
const struct intel_crtc_state *crtc_state);
|
||||
void intel_psr_pause(struct intel_dp *intel_dp);
|
||||
void intel_psr_resume(struct intel_dp *intel_dp);
|
||||
|
||||
|
@ -1335,6 +1335,13 @@ static int intel_sdvo_compute_config(struct intel_encoder *encoder,
|
||||
adjusted_mode);
|
||||
pipe_config->sdvo_tv_clock = true;
|
||||
} else if (IS_LVDS(intel_sdvo_connector)) {
|
||||
int ret;
|
||||
|
||||
ret = intel_panel_compute_config(&intel_sdvo_connector->base,
|
||||
adjusted_mode);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (!intel_sdvo_set_output_timings_from_mode(intel_sdvo,
|
||||
intel_sdvo_connector->base.panel.fixed_mode))
|
||||
return -EINVAL;
|
||||
@ -1873,7 +1880,6 @@ intel_sdvo_mode_valid(struct drm_connector *connector,
|
||||
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
|
||||
return MODE_NO_DBLESCAN;
|
||||
|
||||
|
||||
if (clock > max_dotclk)
|
||||
return MODE_CLOCK_HIGH;
|
||||
|
||||
@ -1890,14 +1896,11 @@ intel_sdvo_mode_valid(struct drm_connector *connector,
|
||||
return MODE_CLOCK_HIGH;
|
||||
|
||||
if (IS_LVDS(intel_sdvo_connector)) {
|
||||
const struct drm_display_mode *fixed_mode =
|
||||
intel_sdvo_connector->base.panel.fixed_mode;
|
||||
enum drm_mode_status status;
|
||||
|
||||
if (mode->hdisplay > fixed_mode->hdisplay)
|
||||
return MODE_PANEL;
|
||||
|
||||
if (mode->vdisplay > fixed_mode->vdisplay)
|
||||
return MODE_PANEL;
|
||||
status = intel_panel_mode_valid(&intel_sdvo_connector->base, mode);
|
||||
if (status != MODE_OK)
|
||||
return status;
|
||||
}
|
||||
|
||||
return MODE_OK;
|
||||
|
@ -5,6 +5,8 @@
|
||||
|
||||
#include <linux/util_macros.h>
|
||||
|
||||
#include "intel_ddi.h"
|
||||
#include "intel_ddi_buf_trans.h"
|
||||
#include "intel_de.h"
|
||||
#include "intel_display_types.h"
|
||||
#include "intel_snps_phy.h"
|
||||
@ -50,58 +52,28 @@ void intel_snps_phy_update_psr_power_state(struct drm_i915_private *dev_priv,
|
||||
SNPS_PHY_TX_REQ_LN_DIS_PWR_STATE_PSR, val);
|
||||
}
|
||||
|
||||
static const u32 dg2_ddi_translations[] = {
|
||||
/* VS 0, pre-emph 0 */
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_MAIN, 26),
|
||||
|
||||
/* VS 0, pre-emph 1 */
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_MAIN, 33) |
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_POST, 6),
|
||||
|
||||
/* VS 0, pre-emph 2 */
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_MAIN, 38) |
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_POST, 12),
|
||||
|
||||
/* VS 0, pre-emph 3 */
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_MAIN, 43) |
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_POST, 19),
|
||||
|
||||
/* VS 1, pre-emph 0 */
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_MAIN, 39),
|
||||
|
||||
/* VS 1, pre-emph 1 */
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_MAIN, 44) |
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_POST, 8),
|
||||
|
||||
/* VS 1, pre-emph 2 */
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_MAIN, 47) |
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_POST, 15),
|
||||
|
||||
/* VS 2, pre-emph 0 */
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_MAIN, 52),
|
||||
|
||||
/* VS 2, pre-emph 1 */
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_MAIN, 51) |
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_POST, 10),
|
||||
|
||||
/* VS 3, pre-emph 0 */
|
||||
REG_FIELD_PREP(SNPS_PHY_TX_EQ_MAIN, 62),
|
||||
};
|
||||
|
||||
void intel_snps_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
|
||||
u32 level)
|
||||
void intel_snps_phy_set_signal_levels(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
const struct intel_ddi_buf_trans *trans;
|
||||
enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
|
||||
int level = intel_ddi_level(encoder, crtc_state, 0);
|
||||
int n_entries, ln;
|
||||
|
||||
n_entries = ARRAY_SIZE(dg2_ddi_translations);
|
||||
if (level >= n_entries)
|
||||
level = n_entries - 1;
|
||||
trans = encoder->get_buf_trans(encoder, crtc_state, &n_entries);
|
||||
if (drm_WARN_ON_ONCE(&dev_priv->drm, !trans))
|
||||
return;
|
||||
|
||||
for (ln = 0; ln < 4; ln++)
|
||||
intel_de_write(dev_priv, SNPS_PHY_TX_EQ(ln, phy),
|
||||
dg2_ddi_translations[level]);
|
||||
for (ln = 0; ln < 4; ln++) {
|
||||
u32 val = 0;
|
||||
|
||||
val |= REG_FIELD_PREP(SNPS_PHY_TX_EQ_MAIN, trans->entries[level].snps.snps_vswing);
|
||||
val |= REG_FIELD_PREP(SNPS_PHY_TX_EQ_PRE, trans->entries[level].snps.snps_pre_cursor);
|
||||
val |= REG_FIELD_PREP(SNPS_PHY_TX_EQ_POST, trans->entries[level].snps.snps_post_cursor);
|
||||
|
||||
intel_de_write(dev_priv, SNPS_PHY_TX_EQ(ln, phy), val);
|
||||
}
|
||||
}
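/*
* Illustrative note: REG_FIELD_PREP() packs a value into the register field
* described by its mask, so one TX_EQ write combines the vswing, pre-cursor
* and post-cursor settings taken from the selected buf_trans entry. For
* example, the removed "VS 0, pre-emph 1" table entry above corresponds to:
*/
#if 0
	u32 val = REG_FIELD_PREP(SNPS_PHY_TX_EQ_MAIN, 33) |
		  REG_FIELD_PREP(SNPS_PHY_TX_EQ_POST, 6);
#endif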
|
||||
|
||||
/*
|
||||
@ -198,11 +170,81 @@ static const struct intel_mpllb_state dg2_dp_hbr3_100 = {
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_DEN, 1),
|
||||
};
|
||||
|
||||
static const struct intel_mpllb_state *dg2_dp_100_tables[] = {
|
||||
static const struct intel_mpllb_state dg2_dp_uhbr10_100 = {
|
||||
.clock = 1000000,
|
||||
.ref_control =
|
||||
REG_FIELD_PREP(SNPS_PHY_REF_CONTROL_REF_RANGE, 3),
|
||||
.mpllb_cp =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT, 4) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP, 21) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT_GS, 65) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP_GS, 127),
|
||||
.mpllb_div =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV5_CLK_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV_CLK_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV_MULTIPLIER, 8) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_PMIX_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_WORD_DIV2_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DP2_MODE, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_V2I, 2),
|
||||
.mpllb_div2 =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_REF_CLK_DIV, 2) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_MULTIPLIER, 368),
|
||||
.mpllb_fracn1 =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_CGG_UPDATE_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_DEN, 1),
|
||||
|
||||
/*
|
||||
* SSC will be enabled, DP UHBR has a minimum SSC requirement.
|
||||
*/
|
||||
.mpllb_sscen =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_PEAK, 58982),
|
||||
.mpllb_sscstep =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_STEPSIZE, 76101),
|
||||
};
|
||||
|
||||
static const struct intel_mpllb_state dg2_dp_uhbr13_100 = {
|
||||
.clock = 1350000,
|
||||
.ref_control =
|
||||
REG_FIELD_PREP(SNPS_PHY_REF_CONTROL_REF_RANGE, 3),
|
||||
.mpllb_cp =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT, 5) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP, 45) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT_GS, 65) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP_GS, 127),
|
||||
.mpllb_div =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV5_CLK_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV_CLK_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV_MULTIPLIER, 8) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_PMIX_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_WORD_DIV2_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DP2_MODE, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_V2I, 3),
|
||||
.mpllb_div2 =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_REF_CLK_DIV, 2) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_MULTIPLIER, 508),
|
||||
.mpllb_fracn1 =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_CGG_UPDATE_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_DEN, 1),
|
||||
|
||||
/*
|
||||
* SSC will be enabled, DP UHBR has a minimum SSC requirement.
|
||||
*/
|
||||
.mpllb_sscen =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_PEAK, 79626),
|
||||
.mpllb_sscstep =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_STEPSIZE, 102737),
|
||||
};
|
||||
|
||||
static const struct intel_mpllb_state * const dg2_dp_100_tables[] = {
|
||||
&dg2_dp_rbr_100,
|
||||
&dg2_dp_hbr1_100,
|
||||
&dg2_dp_hbr2_100,
|
||||
&dg2_dp_hbr3_100,
|
||||
&dg2_dp_uhbr10_100,
|
||||
&dg2_dp_uhbr13_100,
|
||||
NULL,
|
||||
};
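/*
* Illustrative sketch only: the *_tables arrays are NULL terminated, so a
* caller can walk them until the entry whose .clock matches the requested
* port clock is found (the same pattern intel_snps_phy_check_hdmi_link_rate()
* uses further below). The helper name here is an assumption.
*/
#if 0
static const struct intel_mpllb_state *
example_find_mpllb_state(const struct intel_mpllb_state * const *tables, int clock)
{
	int i;

	for (i = 0; tables[i]; i++) {
		if (tables[i]->clock == clock)
			return tables[i];
	}

	return NULL;
}
#endif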
|
||||
|
||||
@ -311,11 +353,88 @@ static const struct intel_mpllb_state dg2_dp_hbr3_38_4 = {
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_QUOT, 61440),
|
||||
};
|
||||
|
||||
static const struct intel_mpllb_state *dg2_dp_38_4_tables[] = {
|
||||
static const struct intel_mpllb_state dg2_dp_uhbr10_38_4 = {
|
||||
.clock = 1000000,
|
||||
.ref_control =
|
||||
REG_FIELD_PREP(SNPS_PHY_REF_CONTROL_REF_RANGE, 1),
|
||||
.mpllb_cp =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT, 5) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP, 26) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT_GS, 65) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP_GS, 127),
|
||||
.mpllb_div =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV5_CLK_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV_CLK_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV_MULTIPLIER, 8) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_PMIX_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_WORD_DIV2_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DP2_MODE, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_V2I, 2),
|
||||
.mpllb_div2 =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_REF_CLK_DIV, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_MULTIPLIER, 488),
|
||||
.mpllb_fracn1 =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_CGG_UPDATE_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_DEN, 3),
|
||||
.mpllb_fracn2 =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_REM, 2) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_QUOT, 27306),
|
||||
|
||||
/*
|
||||
* SSC will be enabled, DP UHBR has a minimum SSC requirement.
|
||||
*/
|
||||
.mpllb_sscen =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_PEAK, 76800),
|
||||
.mpllb_sscstep =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_STEPSIZE, 129024),
|
||||
};
|
||||
|
||||
static const struct intel_mpllb_state dg2_dp_uhbr13_38_4 = {
|
||||
.clock = 1350000,
|
||||
.ref_control =
|
||||
REG_FIELD_PREP(SNPS_PHY_REF_CONTROL_REF_RANGE, 1),
|
||||
.mpllb_cp =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT, 6) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP, 56) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT_GS, 65) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP_GS, 127),
|
||||
.mpllb_div =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV5_CLK_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV_CLK_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV_MULTIPLIER, 8) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_PMIX_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_WORD_DIV2_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_DP2_MODE, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_V2I, 3),
|
||||
.mpllb_div2 =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_REF_CLK_DIV, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_MULTIPLIER, 670),
|
||||
.mpllb_fracn1 =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_CGG_UPDATE_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_DEN, 1),
|
||||
.mpllb_fracn2 =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_QUOT, 36864),
|
||||
|
||||
/*
|
||||
* SSC will be enabled, DP UHBR has a minimum SSC requirement.
|
||||
*/
|
||||
.mpllb_sscen =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_EN, 1) |
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_PEAK, 103680),
|
||||
.mpllb_sscstep =
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_STEPSIZE, 174182),
|
||||
};
|
||||
|
||||
static const struct intel_mpllb_state * const dg2_dp_38_4_tables[] = {
|
||||
&dg2_dp_rbr_38_4,
|
||||
&dg2_dp_hbr1_38_4,
|
||||
&dg2_dp_hbr2_38_4,
|
||||
&dg2_dp_hbr3_38_4,
|
||||
&dg2_dp_uhbr10_38_4,
|
||||
&dg2_dp_uhbr13_38_4,
|
||||
NULL,
|
||||
};
|
||||
|
||||
@ -448,7 +567,7 @@ static const struct intel_mpllb_state dg2_edp_r432 = {
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_STEPSIZE, 65752),
|
||||
};
|
||||
|
||||
static const struct intel_mpllb_state *dg2_edp_tables[] = {
|
||||
static const struct intel_mpllb_state * const dg2_edp_tables[] = {
|
||||
&dg2_dp_rbr_100,
|
||||
&dg2_edp_r216,
|
||||
&dg2_edp_r243,
|
||||
@ -611,7 +730,7 @@ static const struct intel_mpllb_state dg2_hdmi_594 = {
|
||||
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_UP_SPREAD, 1),
|
||||
};
|
||||
|
||||
static const struct intel_mpllb_state *dg2_hdmi_tables[] = {
|
||||
static const struct intel_mpllb_state * const dg2_hdmi_tables[] = {
|
||||
&dg2_hdmi_25_175,
|
||||
&dg2_hdmi_27_0,
|
||||
&dg2_hdmi_74_25,
|
||||
@ -620,7 +739,7 @@ static const struct intel_mpllb_state *dg2_hdmi_tables[] = {
|
||||
NULL,
|
||||
};
|
||||
|
||||
static const struct intel_mpllb_state **
|
||||
static const struct intel_mpllb_state * const *
|
||||
intel_mpllb_tables_get(struct intel_crtc_state *crtc_state,
|
||||
struct intel_encoder *encoder)
|
||||
{
|
||||
@ -654,7 +773,7 @@ intel_mpllb_tables_get(struct intel_crtc_state *crtc_state,
|
||||
int intel_mpllb_calc_state(struct intel_crtc_state *crtc_state,
|
||||
struct intel_encoder *encoder)
|
||||
{
|
||||
const struct intel_mpllb_state **tables;
|
||||
const struct intel_mpllb_state * const *tables;
|
||||
int i;
|
||||
|
||||
if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) {
|
||||
@ -850,7 +969,7 @@ void intel_mpllb_readout_hw_state(struct intel_encoder *encoder,
|
||||
|
||||
int intel_snps_phy_check_hdmi_link_rate(int clock)
|
||||
{
|
||||
const struct intel_mpllb_state **tables = dg2_hdmi_tables;
|
||||
const struct intel_mpllb_state * const *tables = dg2_hdmi_tables;
|
||||
int i;
|
||||
|
||||
for (i = 0; tables[i]; i++) {
|
||||
|
@ -29,7 +29,7 @@ int intel_mpllb_calc_port_clock(struct intel_encoder *encoder,
|
||||
const struct intel_mpllb_state *pll_state);
|
||||
|
||||
int intel_snps_phy_check_hdmi_link_rate(int clock);
|
||||
void intel_snps_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
|
||||
u32 level);
|
||||
void intel_snps_phy_set_signal_levels(struct intel_encoder *encoder,
|
||||
const struct intel_crtc_state *crtc_state);
|
||||
|
||||
#endif /* __INTEL_SNPS_PHY_H__ */
|
||||
|
@ -12,44 +12,81 @@
|
||||
static const char *tc_port_mode_name(enum tc_port_mode mode)
|
||||
{
|
||||
static const char * const names[] = {
|
||||
[TC_PORT_DISCONNECTED] = "disconnected",
|
||||
[TC_PORT_TBT_ALT] = "tbt-alt",
|
||||
[TC_PORT_DP_ALT] = "dp-alt",
|
||||
[TC_PORT_LEGACY] = "legacy",
|
||||
};
|
||||
|
||||
if (WARN_ON(mode >= ARRAY_SIZE(names)))
|
||||
mode = TC_PORT_TBT_ALT;
|
||||
mode = TC_PORT_DISCONNECTED;
|
||||
|
||||
return names[mode];
|
||||
}
|
||||
|
||||
static enum intel_display_power_domain
|
||||
tc_cold_get_power_domain(struct intel_digital_port *dig_port)
|
||||
static bool intel_tc_port_in_mode(struct intel_digital_port *dig_port,
|
||||
enum tc_port_mode mode)
|
||||
{
|
||||
if (intel_tc_cold_requires_aux_pw(dig_port))
|
||||
return intel_legacy_aux_to_power_domain(dig_port->aux_ch);
|
||||
else
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
enum phy phy = intel_port_to_phy(i915, dig_port->base.port);
|
||||
|
||||
return intel_phy_is_tc(i915, phy) && dig_port->tc_mode == mode;
|
||||
}
|
||||
|
||||
bool intel_tc_port_in_tbt_alt_mode(struct intel_digital_port *dig_port)
|
||||
{
|
||||
return intel_tc_port_in_mode(dig_port, TC_PORT_TBT_ALT);
|
||||
}
|
||||
|
||||
bool intel_tc_port_in_dp_alt_mode(struct intel_digital_port *dig_port)
|
||||
{
|
||||
return intel_tc_port_in_mode(dig_port, TC_PORT_DP_ALT);
|
||||
}
|
||||
|
||||
bool intel_tc_port_in_legacy_mode(struct intel_digital_port *dig_port)
|
||||
{
|
||||
return intel_tc_port_in_mode(dig_port, TC_PORT_LEGACY);
|
||||
}
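/*
* Illustrative sketch only: these helpers let code outside this file test the
* current TypeC mode without poking at dig_port->tc_mode directly, e.g. a
* hypothetical caller could do:
*/
#if 0
	if (intel_tc_port_in_tbt_alt_mode(dig_port))
		drm_dbg_kms(&i915->drm, "Port %s: in TBT-alt mode\n",
			    dig_port->tc_port_name);
#endif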
|
||||
|
||||
bool intel_tc_cold_requires_aux_pw(struct intel_digital_port *dig_port)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
|
||||
return (DISPLAY_VER(i915) == 11 && dig_port->tc_legacy_port) ||
|
||||
IS_ALDERLAKE_P(i915);
|
||||
}
|
||||
|
||||
static enum intel_display_power_domain
|
||||
tc_cold_get_power_domain(struct intel_digital_port *dig_port, enum tc_port_mode mode)
|
||||
{
|
||||
if (mode == TC_PORT_TBT_ALT || !intel_tc_cold_requires_aux_pw(dig_port))
|
||||
return POWER_DOMAIN_TC_COLD_OFF;
|
||||
|
||||
return intel_legacy_aux_to_power_domain(dig_port->aux_ch);
|
||||
}
|
||||
|
||||
static intel_wakeref_t
|
||||
tc_cold_block(struct intel_digital_port *dig_port)
|
||||
tc_cold_block_in_mode(struct intel_digital_port *dig_port, enum tc_port_mode mode,
|
||||
enum intel_display_power_domain *domain)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
enum intel_display_power_domain domain;
|
||||
|
||||
if (DISPLAY_VER(i915) == 11 && !dig_port->tc_legacy_port)
|
||||
return 0;
|
||||
*domain = tc_cold_get_power_domain(dig_port, mode);
|
||||
|
||||
domain = tc_cold_get_power_domain(dig_port);
|
||||
return intel_display_power_get(i915, domain);
|
||||
return intel_display_power_get(i915, *domain);
|
||||
}
|
||||
|
||||
static intel_wakeref_t
|
||||
tc_cold_block(struct intel_digital_port *dig_port, enum intel_display_power_domain *domain)
|
||||
{
|
||||
return tc_cold_block_in_mode(dig_port, dig_port->tc_mode, domain);
|
||||
}
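/*
* Illustrative sketch only: with the power domain now returned via the new
* out parameter, a typical register access under TC-cold protection looks
* roughly like below (mirroring tc_has_modular_fia() further down); the
* helper name is an assumption.
*/
#if 0
static u32 example_read_under_tc_cold(struct intel_digital_port *dig_port)
{
	struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
	enum intel_display_power_domain domain;
	intel_wakeref_t wakeref;
	u32 val;

	wakeref = tc_cold_block(dig_port, &domain);
	val = intel_uncore_read(&i915->uncore, PORT_TX_DFLEXDPSP(FIA1));
	tc_cold_unblock(dig_port, domain, wakeref);

	return val;
}
#endif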
|
||||
|
||||
static void
|
||||
tc_cold_unblock(struct intel_digital_port *dig_port, intel_wakeref_t wakeref)
|
||||
tc_cold_unblock(struct intel_digital_port *dig_port, enum intel_display_power_domain domain,
|
||||
intel_wakeref_t wakeref)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
enum intel_display_power_domain domain;
|
||||
|
||||
/*
|
||||
* wakeref == -1 means some error happened while saving save_depot_stack, but
|
||||
@ -59,8 +96,7 @@ tc_cold_unblock(struct intel_digital_port *dig_port, intel_wakeref_t wakeref)
|
||||
if (wakeref == 0)
|
||||
return;
|
||||
|
||||
domain = tc_cold_get_power_domain(dig_port);
|
||||
intel_display_power_put_async(i915, domain, wakeref);
|
||||
intel_display_power_put(i915, domain, wakeref);
|
||||
}
|
||||
|
||||
static void
|
||||
@ -69,11 +105,9 @@ assert_tc_cold_blocked(struct intel_digital_port *dig_port)
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
bool enabled;
|
||||
|
||||
if (DISPLAY_VER(i915) == 11 && !dig_port->tc_legacy_port)
|
||||
return;
|
||||
|
||||
enabled = intel_display_power_is_enabled(i915,
|
||||
tc_cold_get_power_domain(dig_port));
|
||||
tc_cold_get_power_domain(dig_port,
|
||||
dig_port->tc_mode));
|
||||
drm_WARN_ON(&i915->drm, !enabled);
|
||||
}
|
||||
|
||||
@ -244,6 +278,11 @@ static u32 adl_tc_port_live_status_mask(struct intel_digital_port *dig_port)
|
||||
struct intel_uncore *uncore = &i915->uncore;
|
||||
u32 val, mask = 0;
|
||||
|
||||
/*
|
||||
* On ADL-P the HW/FW will wake from TCCOLD to complete the read access of
|
||||
* registers in IOM. Note that this doesn't apply to PHY and FIA
|
||||
* registers.
|
||||
*/
|
||||
val = intel_uncore_read(uncore, TCSS_DDI_STATUS(tc_port));
|
||||
if (val & TCSS_DDI_STATUS_HPD_LIVE_STATUS_ALT)
|
||||
mask |= BIT(TC_PORT_DP_ALT);
|
||||
@ -270,6 +309,14 @@ static u32 tc_port_live_status_mask(struct intel_digital_port *dig_port)
|
||||
return icl_tc_port_live_status_mask(dig_port);
|
||||
}
|
||||
|
||||
/*
|
||||
* Return the PHY status complete flag indicating that display can acquire the
|
||||
* PHY ownership. The IOM firmware sets this flag when a DP-alt or legacy sink
|
||||
* is connected and it's ready to switch the ownership to display. The flag
|
||||
* will be left cleared when a TBT-alt sink is connected, where the PHY is
|
||||
* owned by the TBT subsystem and so switching the ownership to display is not
|
||||
* required.
|
||||
*/
|
||||
static bool icl_tc_phy_status_complete(struct intel_digital_port *dig_port)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
@ -288,6 +335,13 @@ static bool icl_tc_phy_status_complete(struct intel_digital_port *dig_port)
|
||||
return val & DP_PHY_MODE_STATUS_COMPLETED(dig_port->tc_phy_fia_idx);
|
||||
}
|
||||
|
||||
/*
|
||||
* Return the PHY status complete flag indicating that display can acquire the
|
||||
* PHY ownership. The IOM firmware sets this flag when it's ready to switch
|
||||
* the ownership to display, regardless of what sink is connected (TBT-alt,
|
||||
* DP-alt, legacy or nothing). For TBT-alt sinks the PHY is owned by the TBT
|
||||
* subsystem and so switching the ownership to display is not required.
|
||||
*/
|
||||
static bool adl_tc_phy_status_complete(struct intel_digital_port *dig_port)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
@ -339,11 +393,6 @@ static bool icl_tc_phy_take_ownership(struct intel_digital_port *dig_port,
|
||||
intel_uncore_write(uncore,
|
||||
PORT_TX_DFLEXDPCSSS(dig_port->tc_phy_fia), val);
|
||||
|
||||
if (!take && wait_for(!tc_phy_status_complete(dig_port), 10))
|
||||
drm_dbg_kms(&i915->drm,
|
||||
"Port %s: PHY complete clear timed out\n",
|
||||
dig_port->tc_port_name);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
@ -429,6 +478,7 @@ static void icl_tc_phy_connect(struct intel_digital_port *dig_port,
|
||||
int required_lanes)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
u32 live_status_mask;
|
||||
int max_lanes;
|
||||
|
||||
if (!tc_phy_status_complete(dig_port)) {
|
||||
@ -437,6 +487,13 @@ static void icl_tc_phy_connect(struct intel_digital_port *dig_port,
|
||||
goto out_set_tbt_alt_mode;
|
||||
}
|
||||
|
||||
live_status_mask = tc_port_live_status_mask(dig_port);
|
||||
if (!(live_status_mask & (BIT(TC_PORT_DP_ALT) | BIT(TC_PORT_LEGACY)))) {
|
||||
drm_dbg_kms(&i915->drm, "Port %s: PHY ownership not required (live status %02x)\n",
|
||||
dig_port->tc_port_name, live_status_mask);
|
||||
goto out_set_tbt_alt_mode;
|
||||
}
|
||||
|
||||
if (!tc_phy_take_ownership(dig_port, true) &&
|
||||
!drm_WARN_ON(&i915->drm, dig_port->tc_legacy_port))
|
||||
goto out_set_tbt_alt_mode;
|
||||
@ -485,14 +542,13 @@ static void icl_tc_phy_disconnect(struct intel_digital_port *dig_port)
|
||||
{
|
||||
switch (dig_port->tc_mode) {
|
||||
case TC_PORT_LEGACY:
|
||||
/* Nothing to do, we never disconnect from legacy mode */
|
||||
break;
|
||||
case TC_PORT_DP_ALT:
|
||||
tc_phy_take_ownership(dig_port, false);
|
||||
dig_port->tc_mode = TC_PORT_TBT_ALT;
|
||||
break;
|
||||
fallthrough;
|
||||
case TC_PORT_TBT_ALT:
|
||||
/* Nothing to do, we stay in TBT-alt mode */
|
||||
dig_port->tc_mode = TC_PORT_DISCONNECTED;
|
||||
fallthrough;
|
||||
case TC_PORT_DISCONNECTED:
|
||||
break;
|
||||
default:
|
||||
MISSING_CASE(dig_port->tc_mode);
|
||||
@ -509,6 +565,10 @@ static bool icl_tc_phy_is_connected(struct intel_digital_port *dig_port)
|
||||
return dig_port->tc_mode == TC_PORT_TBT_ALT;
|
||||
}
|
||||
|
||||
/* On ADL-P the PHY complete flag is set in TBT mode as well. */
|
||||
if (IS_ALDERLAKE_P(i915) && dig_port->tc_mode == TC_PORT_TBT_ALT)
|
||||
return true;
|
||||
|
||||
if (!tc_phy_is_owned(dig_port)) {
|
||||
drm_dbg_kms(&i915->drm, "Port %s: PHY not owned\n",
|
||||
dig_port->tc_port_name);
|
||||
@ -550,9 +610,7 @@ intel_tc_port_get_target_mode(struct intel_digital_port *dig_port)
|
||||
if (live_status_mask)
|
||||
return fls(live_status_mask) - 1;
|
||||
|
||||
return tc_phy_status_complete(dig_port) &&
|
||||
dig_port->tc_legacy_port ? TC_PORT_LEGACY :
|
||||
TC_PORT_TBT_ALT;
|
||||
return TC_PORT_TBT_ALT;
|
||||
}
|
||||
|
||||
static void intel_tc_port_reset_mode(struct intel_digital_port *dig_port,
|
||||
@ -581,6 +639,43 @@ static void intel_tc_port_reset_mode(struct intel_digital_port *dig_port,
|
||||
tc_port_mode_name(dig_port->tc_mode));
|
||||
}
|
||||
|
||||
static bool intel_tc_port_needs_reset(struct intel_digital_port *dig_port)
|
||||
{
|
||||
return intel_tc_port_get_target_mode(dig_port) != dig_port->tc_mode;
|
||||
}
|
||||
|
||||
static void intel_tc_port_update_mode(struct intel_digital_port *dig_port,
|
||||
int required_lanes, bool force_disconnect)
|
||||
{
|
||||
enum intel_display_power_domain domain;
|
||||
intel_wakeref_t wref;
|
||||
bool needs_reset = force_disconnect;
|
||||
|
||||
if (!needs_reset) {
|
||||
/* Get power domain required to check the hotplug live status. */
|
||||
wref = tc_cold_block(dig_port, &domain);
|
||||
needs_reset = intel_tc_port_needs_reset(dig_port);
|
||||
tc_cold_unblock(dig_port, domain, wref);
|
||||
}
|
||||
|
||||
if (!needs_reset)
|
||||
return;
|
||||
|
||||
/* Get power domain required for resetting the mode. */
|
||||
wref = tc_cold_block_in_mode(dig_port, TC_PORT_DISCONNECTED, &domain);
|
||||
|
||||
intel_tc_port_reset_mode(dig_port, required_lanes, force_disconnect);
|
||||
|
||||
/* Get power domain matching the new mode after reset. */
|
||||
tc_cold_unblock(dig_port, dig_port->tc_lock_power_domain,
|
||||
fetch_and_zero(&dig_port->tc_lock_wakeref));
|
||||
if (dig_port->tc_mode != TC_PORT_DISCONNECTED)
|
||||
dig_port->tc_lock_wakeref = tc_cold_block(dig_port,
|
||||
&dig_port->tc_lock_power_domain);
|
||||
|
||||
tc_cold_unblock(dig_port, domain, wref);
|
||||
}
|
||||
|
||||
static void
|
||||
intel_tc_port_link_init_refcount(struct intel_digital_port *dig_port,
|
||||
int refcount)
|
||||
@ -595,45 +690,42 @@ void intel_tc_port_sanitize(struct intel_digital_port *dig_port)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
struct intel_encoder *encoder = &dig_port->base;
|
||||
intel_wakeref_t tc_cold_wref;
|
||||
int active_links = 0;
|
||||
|
||||
mutex_lock(&dig_port->tc_lock);
|
||||
tc_cold_wref = tc_cold_block(dig_port);
|
||||
|
||||
dig_port->tc_mode = intel_tc_port_get_current_mode(dig_port);
|
||||
if (dig_port->dp.is_mst)
|
||||
active_links = intel_dp_mst_encoder_active_links(dig_port);
|
||||
else if (encoder->base.crtc)
|
||||
active_links = to_intel_crtc(encoder->base.crtc)->active;
|
||||
|
||||
drm_WARN_ON(&i915->drm, dig_port->tc_mode != TC_PORT_DISCONNECTED);
|
||||
drm_WARN_ON(&i915->drm, dig_port->tc_lock_wakeref);
|
||||
if (active_links) {
|
||||
enum intel_display_power_domain domain;
|
||||
intel_wakeref_t tc_cold_wref = tc_cold_block(dig_port, &domain);
|
||||
|
||||
dig_port->tc_mode = intel_tc_port_get_current_mode(dig_port);
|
||||
|
||||
if (!icl_tc_phy_is_connected(dig_port))
|
||||
drm_dbg_kms(&i915->drm,
|
||||
"Port %s: PHY disconnected with %d active link(s)\n",
|
||||
dig_port->tc_port_name, active_links);
|
||||
intel_tc_port_link_init_refcount(dig_port, active_links);
|
||||
|
||||
goto out;
|
||||
dig_port->tc_lock_wakeref = tc_cold_block(dig_port,
|
||||
&dig_port->tc_lock_power_domain);
|
||||
|
||||
tc_cold_unblock(dig_port, domain, tc_cold_wref);
|
||||
}
|
||||
|
||||
if (dig_port->tc_legacy_port)
|
||||
icl_tc_phy_connect(dig_port, 1);
|
||||
|
||||
out:
|
||||
drm_dbg_kms(&i915->drm, "Port %s: sanitize mode (%s)\n",
|
||||
dig_port->tc_port_name,
|
||||
tc_port_mode_name(dig_port->tc_mode));
|
||||
|
||||
tc_cold_unblock(dig_port, tc_cold_wref);
|
||||
mutex_unlock(&dig_port->tc_lock);
|
||||
}
|
||||
|
||||
static bool intel_tc_port_needs_reset(struct intel_digital_port *dig_port)
|
||||
{
|
||||
return intel_tc_port_get_target_mode(dig_port) != dig_port->tc_mode;
|
||||
}
|
||||
|
||||
/*
|
||||
* The type-C ports are different because even when they are connected, they may
|
||||
* not be available/usable by the graphics driver: see the comment on
|
||||
@ -648,78 +740,79 @@ bool intel_tc_port_connected(struct intel_encoder *encoder)
|
||||
{
|
||||
struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
|
||||
bool is_connected;
|
||||
intel_wakeref_t tc_cold_wref;
|
||||
|
||||
intel_tc_port_lock(dig_port);
|
||||
tc_cold_wref = tc_cold_block(dig_port);
|
||||
|
||||
is_connected = tc_port_live_status_mask(dig_port) &
|
||||
BIT(dig_port->tc_mode);
|
||||
|
||||
tc_cold_unblock(dig_port, tc_cold_wref);
|
||||
intel_tc_port_unlock(dig_port);
|
||||
|
||||
return is_connected;
|
||||
}
|
||||
|
||||
static void __intel_tc_port_lock(struct intel_digital_port *dig_port,
|
||||
int required_lanes, bool force_disconnect)
|
||||
int required_lanes)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
intel_wakeref_t wakeref;
|
||||
|
||||
wakeref = intel_display_power_get(i915, POWER_DOMAIN_DISPLAY_CORE);
|
||||
|
||||
mutex_lock(&dig_port->tc_lock);
|
||||
|
||||
if (!dig_port->tc_link_refcount) {
|
||||
intel_wakeref_t tc_cold_wref;
|
||||
cancel_delayed_work(&dig_port->tc_disconnect_phy_work);
|
||||
|
||||
tc_cold_wref = tc_cold_block(dig_port);
|
||||
if (!dig_port->tc_link_refcount)
|
||||
intel_tc_port_update_mode(dig_port, required_lanes,
|
||||
false);
|
||||
|
||||
if (force_disconnect || intel_tc_port_needs_reset(dig_port))
|
||||
intel_tc_port_reset_mode(dig_port, required_lanes,
|
||||
force_disconnect);
|
||||
|
||||
tc_cold_unblock(dig_port, tc_cold_wref);
|
||||
}
|
||||
|
||||
drm_WARN_ON(&i915->drm, dig_port->tc_lock_wakeref);
|
||||
dig_port->tc_lock_wakeref = wakeref;
|
||||
drm_WARN_ON(&i915->drm, dig_port->tc_mode == TC_PORT_DISCONNECTED);
|
||||
drm_WARN_ON(&i915->drm, dig_port->tc_mode != TC_PORT_TBT_ALT &&
|
||||
!tc_phy_is_owned(dig_port));
|
||||
}
|
||||
|
||||
void intel_tc_port_lock(struct intel_digital_port *dig_port)
|
||||
{
|
||||
__intel_tc_port_lock(dig_port, 1, false);
|
||||
__intel_tc_port_lock(dig_port, 1);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_tc_port_disconnect_phy_work: disconnect TypeC PHY from display port
|
||||
* @dig_port: digital port
|
||||
*
|
||||
* Disconnect the given digital port from its TypeC PHY (handing back the
|
||||
* control of the PHY to the TypeC subsystem). This will happen in a delayed
|
||||
* manner after each AUX transaction and modeset disable.
|
||||
*/
|
||||
static void intel_tc_port_disconnect_phy_work(struct work_struct *work)
|
||||
{
|
||||
struct intel_digital_port *dig_port =
|
||||
container_of(work, struct intel_digital_port, tc_disconnect_phy_work.work);
|
||||
|
||||
mutex_lock(&dig_port->tc_lock);
|
||||
|
||||
if (!dig_port->tc_link_refcount)
|
||||
intel_tc_port_update_mode(dig_port, 1, true);
|
||||
|
||||
mutex_unlock(&dig_port->tc_lock);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_tc_port_flush_work: flush the work disconnecting the PHY
|
||||
* @dig_port: digital port
|
||||
*
|
||||
* Flush the delayed work disconnecting an idle PHY.
|
||||
*/
|
||||
void intel_tc_port_flush_work(struct intel_digital_port *dig_port)
|
||||
{
|
||||
flush_delayed_work(&dig_port->tc_disconnect_phy_work);
|
||||
}
|
||||
|
||||
void intel_tc_port_unlock(struct intel_digital_port *dig_port)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
intel_wakeref_t wakeref = fetch_and_zero(&dig_port->tc_lock_wakeref);
|
||||
if (!dig_port->tc_link_refcount && dig_port->tc_mode != TC_PORT_DISCONNECTED)
|
||||
queue_delayed_work(system_unbound_wq, &dig_port->tc_disconnect_phy_work,
|
||||
msecs_to_jiffies(1000));
|
||||
|
||||
mutex_unlock(&dig_port->tc_lock);
|
||||
|
||||
intel_display_power_put_async(i915, POWER_DOMAIN_DISPLAY_CORE,
|
||||
wakeref);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_tc_port_disconnect_phy: disconnect TypeC PHY from display port
|
||||
* @dig_port: digital port
|
||||
*
|
||||
* Disconnect the given digital port from its TypeC PHY (handing back the
|
||||
* control of the PHY to the TypeC subsystem). The only purpose of this
|
||||
* function is to force the disconnect even with a TypeC display output still
|
||||
* plugged to the TypeC connector, which is required by the TypeC firmwares
|
||||
* during system suspend and shutdown. Otherwise - during the unplug event
|
||||
* handling - the PHY ownership is released automatically by
|
||||
* intel_tc_port_reset_mode(), in which case calling this function is not required.
|
||||
*/
|
||||
void intel_tc_port_disconnect_phy(struct intel_digital_port *dig_port)
|
||||
{
|
||||
__intel_tc_port_lock(dig_port, 1, true);
|
||||
intel_tc_port_unlock(dig_port);
|
||||
}
|
||||
|
||||
bool intel_tc_port_ref_held(struct intel_digital_port *dig_port)
|
||||
@ -731,21 +824,30 @@ bool intel_tc_port_ref_held(struct intel_digital_port *dig_port)
|
||||
void intel_tc_port_get_link(struct intel_digital_port *dig_port,
|
||||
int required_lanes)
|
||||
{
|
||||
__intel_tc_port_lock(dig_port, required_lanes, false);
|
||||
__intel_tc_port_lock(dig_port, required_lanes);
|
||||
dig_port->tc_link_refcount++;
|
||||
intel_tc_port_unlock(dig_port);
|
||||
}
|
||||
|
||||
void intel_tc_port_put_link(struct intel_digital_port *dig_port)
|
||||
{
|
||||
mutex_lock(&dig_port->tc_lock);
|
||||
dig_port->tc_link_refcount--;
|
||||
mutex_unlock(&dig_port->tc_lock);
|
||||
intel_tc_port_lock(dig_port);
|
||||
--dig_port->tc_link_refcount;
|
||||
intel_tc_port_unlock(dig_port);
|
||||
|
||||
/*
|
||||
* Disconnecting the PHY after the PHY's PLL gets disabled may
|
||||
* hang the system on ADL-P, so disconnect the PHY here synchronously.
|
||||
* TODO: remove this once the root cause of the ordering requirement
|
||||
* is found/fixed.
|
||||
*/
|
||||
intel_tc_port_flush_work(dig_port);
|
||||
}
|
||||
|
||||
static bool
|
||||
tc_has_modular_fia(struct drm_i915_private *i915, struct intel_digital_port *dig_port)
|
||||
{
|
||||
enum intel_display_power_domain domain;
|
||||
intel_wakeref_t wakeref;
|
||||
u32 val;
|
||||
|
||||
@ -753,9 +855,9 @@ tc_has_modular_fia(struct drm_i915_private *i915, struct intel_digital_port *dig
|
||||
return false;
|
||||
|
||||
mutex_lock(&dig_port->tc_lock);
|
||||
wakeref = tc_cold_block(dig_port);
|
||||
wakeref = tc_cold_block(dig_port, &domain);
|
||||
val = intel_uncore_read(&i915->uncore, PORT_TX_DFLEXDPSP(FIA1));
|
||||
tc_cold_unblock(dig_port, wakeref);
|
||||
tc_cold_unblock(dig_port, domain, wakeref);
|
||||
mutex_unlock(&dig_port->tc_lock);
|
||||
|
||||
drm_WARN_ON(&i915->drm, val == 0xffffffff);
|
||||
@ -795,15 +897,9 @@ void intel_tc_port_init(struct intel_digital_port *dig_port, bool is_legacy)
|
||||
"%c/TC#%d", port_name(port), tc_port + 1);
|
||||
|
||||
mutex_init(&dig_port->tc_lock);
|
||||
INIT_DELAYED_WORK(&dig_port->tc_disconnect_phy_work, intel_tc_port_disconnect_phy_work);
|
||||
dig_port->tc_legacy_port = is_legacy;
|
||||
dig_port->tc_mode = TC_PORT_DISCONNECTED;
|
||||
dig_port->tc_link_refcount = 0;
|
||||
tc_port_load_fia_params(i915, dig_port);
|
||||
}
|
||||
|
||||
bool intel_tc_cold_requires_aux_pw(struct intel_digital_port *dig_port)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
|
||||
return (DISPLAY_VER(i915) == 11 && dig_port->tc_legacy_port) ||
|
||||
IS_ALDERLAKE_P(i915);
|
||||
}
|
||||
|
@ -12,8 +12,11 @@
|
||||
struct intel_digital_port;
|
||||
struct intel_encoder;
|
||||
|
||||
bool intel_tc_port_in_tbt_alt_mode(struct intel_digital_port *dig_port);
|
||||
bool intel_tc_port_in_dp_alt_mode(struct intel_digital_port *dig_port);
|
||||
bool intel_tc_port_in_legacy_mode(struct intel_digital_port *dig_port);
|
||||
|
||||
bool intel_tc_port_connected(struct intel_encoder *encoder);
|
||||
void intel_tc_port_disconnect_phy(struct intel_digital_port *dig_port);
|
||||
|
||||
u32 intel_tc_port_get_lane_mask(struct intel_digital_port *dig_port);
|
||||
u32 intel_tc_port_get_pin_assignment_mask(struct intel_digital_port *dig_port);
|
||||
@ -24,6 +27,7 @@ void intel_tc_port_set_fia_lane_count(struct intel_digital_port *dig_port,
|
||||
void intel_tc_port_sanitize(struct intel_digital_port *dig_port);
|
||||
void intel_tc_port_lock(struct intel_digital_port *dig_port);
|
||||
void intel_tc_port_unlock(struct intel_digital_port *dig_port);
|
||||
void intel_tc_port_flush_work(struct intel_digital_port *dig_port);
|
||||
void intel_tc_port_get_link(struct intel_digital_port *dig_port,
|
||||
int required_lanes);
|
||||
void intel_tc_port_put_link(struct intel_digital_port *dig_port);
|
||||
|
@ -1529,7 +1529,7 @@ static void intel_tv_pre_enable(struct intel_atomic_state *state,
|
||||
intel_de_write(dev_priv, TV_CLR_LEVEL,
|
||||
((video_levels->black << TV_BLACK_LEVEL_SHIFT) | (video_levels->blank << TV_BLANK_LEVEL_SHIFT)));
|
||||
|
||||
assert_pipe_disabled(dev_priv, pipe_config->cpu_transcoder);
|
||||
assert_transcoder_disabled(dev_priv, pipe_config->cpu_transcoder);
|
||||
|
||||
/* Filter ctl must be set before TV_WIN_SIZE */
|
||||
tv_filter_ctl = TV_AUTO_SCALE;
|
||||
|
@ -814,6 +814,11 @@ struct lfp_brightness_level {
|
||||
u16 reserved;
|
||||
} __packed;
|
||||
|
||||
#define EXP_BDB_LFP_BL_DATA_SIZE_REV_191 \
|
||||
offsetof(struct bdb_lfp_backlight_data, brightness_level)
|
||||
#define EXP_BDB_LFP_BL_DATA_SIZE_REV_234 \
|
||||
offsetof(struct bdb_lfp_backlight_data, brightness_precision_bits)
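/*
* Illustrative sketch only: the EXP_BDB_LFP_BL_DATA_SIZE_REV_* values give
* the minimum backlight data block size expected for a given BDB revision,
* so a parser can sanity check the block before reading the newer fields.
* The surrounding variable and helper names below are assumptions.
*/
#if 0
	if (bdb_version >= 234 &&
	    get_blocksize(backlight_data) < EXP_BDB_LFP_BL_DATA_SIZE_REV_234)
		return;	/* too small for brightness_precision_bits */
	if (bdb_version >= 191 &&
	    get_blocksize(backlight_data) < EXP_BDB_LFP_BL_DATA_SIZE_REV_191)
		return;	/* too small for brightness_level */
#endif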
|
||||
|
||||
struct bdb_lfp_backlight_data {
|
||||
u8 entry_size;
|
||||
struct lfp_backlight_data_entry data[16];
|
||||
|
@ -357,11 +357,9 @@ bool intel_dsc_source_support(const struct intel_crtc_state *crtc_state)
|
||||
return false;
|
||||
}
|
||||
|
||||
static bool is_pipe_dsc(const struct intel_crtc_state *crtc_state)
|
||||
static bool is_pipe_dsc(struct intel_crtc *crtc, enum transcoder cpu_transcoder)
|
||||
{
|
||||
const struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
|
||||
const struct drm_i915_private *i915 = to_i915(crtc->base.dev);
|
||||
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
|
||||
struct drm_i915_private *i915 = to_i915(crtc->base.dev);
|
||||
|
||||
if (DISPLAY_VER(i915) >= 12)
|
||||
return true;
|
||||
@ -547,9 +545,8 @@ int intel_dsc_compute_params(struct intel_encoder *encoder,
|
||||
}
|
||||
|
||||
enum intel_display_power_domain
|
||||
intel_dsc_power_domain(const struct intel_crtc_state *crtc_state)
|
||||
intel_dsc_power_domain(struct intel_crtc *crtc, enum transcoder cpu_transcoder)
|
||||
{
|
||||
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
|
||||
struct drm_i915_private *i915 = to_i915(crtc->base.dev);
|
||||
enum pipe pipe = crtc->pipe;
|
||||
|
||||
@ -566,7 +563,7 @@ intel_dsc_power_domain(const struct intel_crtc_state *crtc_state)
|
||||
*/
|
||||
if (DISPLAY_VER(i915) == 12 && !IS_ROCKETLAKE(i915) && pipe == PIPE_A)
|
||||
return POWER_DOMAIN_TRANSCODER_VDSC_PW2;
|
||||
else if (is_pipe_dsc(crtc_state))
|
||||
else if (is_pipe_dsc(crtc, cpu_transcoder))
|
||||
return POWER_DOMAIN_PIPE(pipe);
|
||||
else
|
||||
return POWER_DOMAIN_TRANSCODER_VDSC_PW2;
|
||||
@ -577,6 +574,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
|
||||
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
|
||||
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
|
||||
const struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
|
||||
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
|
||||
enum pipe pipe = crtc->pipe;
|
||||
u32 pps_val = 0;
|
||||
u32 rc_buf_thresh_dword[4];
|
||||
@ -601,7 +599,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
|
||||
if (vdsc_cfg->vbr_enable)
|
||||
pps_val |= DSC_VBR_ENABLE;
|
||||
drm_info(&dev_priv->drm, "PPS0 = 0x%08x\n", pps_val);
|
||||
if (!is_pipe_dsc(crtc_state)) {
|
||||
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
|
||||
intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_0,
|
||||
pps_val);
|
||||
/*
|
||||
@ -625,7 +623,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
|
||||
pps_val = 0;
|
||||
pps_val |= DSC_BPP(vdsc_cfg->bits_per_pixel);
|
||||
drm_info(&dev_priv->drm, "PPS1 = 0x%08x\n", pps_val);
|
||||
if (!is_pipe_dsc(crtc_state)) {
|
||||
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
|
||||
intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_1,
|
||||
pps_val);
|
||||
/*
|
||||
@ -650,7 +648,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
|
||||
pps_val |= DSC_PIC_HEIGHT(vdsc_cfg->pic_height) |
|
||||
DSC_PIC_WIDTH(vdsc_cfg->pic_width / num_vdsc_instances);
|
||||
drm_info(&dev_priv->drm, "PPS2 = 0x%08x\n", pps_val);
|
||||
if (!is_pipe_dsc(crtc_state)) {
|
||||
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
|
||||
intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_2,
|
||||
pps_val);
|
||||
/*
|
||||
@ -675,7 +673,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
|
||||
pps_val |= DSC_SLICE_HEIGHT(vdsc_cfg->slice_height) |
|
||||
DSC_SLICE_WIDTH(vdsc_cfg->slice_width);
|
||||
drm_info(&dev_priv->drm, "PPS3 = 0x%08x\n", pps_val);
|
||||
if (!is_pipe_dsc(crtc_state)) {
|
||||
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
|
||||
intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_3,
|
||||
pps_val);
|
||||
/*
|
||||
@ -700,7 +698,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
|
||||
pps_val |= DSC_INITIAL_XMIT_DELAY(vdsc_cfg->initial_xmit_delay) |
|
||||
DSC_INITIAL_DEC_DELAY(vdsc_cfg->initial_dec_delay);
|
||||
drm_info(&dev_priv->drm, "PPS4 = 0x%08x\n", pps_val);
|
||||
if (!is_pipe_dsc(crtc_state)) {
|
||||
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
|
||||
intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_4,
|
||||
pps_val);
|
||||
/*
|
||||
@ -725,7 +723,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
|
||||
pps_val |= DSC_SCALE_INC_INT(vdsc_cfg->scale_increment_interval) |
|
||||
DSC_SCALE_DEC_INT(vdsc_cfg->scale_decrement_interval);
|
||||
drm_info(&dev_priv->drm, "PPS5 = 0x%08x\n", pps_val);
|
||||
if (!is_pipe_dsc(crtc_state)) {
|
||||
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
|
||||
intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_5,
|
||||
pps_val);
|
||||
/*
|
||||
@@ -752,7 +750,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
DSC_FLATNESS_MIN_QP(vdsc_cfg->flatness_min_qp) |
DSC_FLATNESS_MAX_QP(vdsc_cfg->flatness_max_qp);
drm_info(&dev_priv->drm, "PPS6 = 0x%08x\n", pps_val);
if (!is_pipe_dsc(crtc_state)) {
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_6,
pps_val);
/*
@@ -777,7 +775,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
pps_val |= DSC_SLICE_BPG_OFFSET(vdsc_cfg->slice_bpg_offset) |
DSC_NFL_BPG_OFFSET(vdsc_cfg->nfl_bpg_offset);
drm_info(&dev_priv->drm, "PPS7 = 0x%08x\n", pps_val);
if (!is_pipe_dsc(crtc_state)) {
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_7,
pps_val);
/*
@@ -802,7 +800,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
pps_val |= DSC_FINAL_OFFSET(vdsc_cfg->final_offset) |
DSC_INITIAL_OFFSET(vdsc_cfg->initial_offset);
drm_info(&dev_priv->drm, "PPS8 = 0x%08x\n", pps_val);
if (!is_pipe_dsc(crtc_state)) {
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_8,
pps_val);
/*
@@ -827,7 +825,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
pps_val |= DSC_RC_MODEL_SIZE(vdsc_cfg->rc_model_size) |
DSC_RC_EDGE_FACTOR(DSC_RC_EDGE_FACTOR_CONST);
drm_info(&dev_priv->drm, "PPS9 = 0x%08x\n", pps_val);
if (!is_pipe_dsc(crtc_state)) {
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_9,
pps_val);
/*
@@ -854,7 +852,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
DSC_RC_TARGET_OFF_HIGH(DSC_RC_TGT_OFFSET_HI_CONST) |
DSC_RC_TARGET_OFF_LOW(DSC_RC_TGT_OFFSET_LO_CONST);
drm_info(&dev_priv->drm, "PPS10 = 0x%08x\n", pps_val);
if (!is_pipe_dsc(crtc_state)) {
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_10,
pps_val);
/*
@@ -882,7 +880,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
DSC_SLICE_ROW_PER_FRAME(vdsc_cfg->pic_height /
vdsc_cfg->slice_height);
drm_info(&dev_priv->drm, "PPS16 = 0x%08x\n", pps_val);
if (!is_pipe_dsc(crtc_state)) {
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_16,
pps_val);
/*
@@ -911,7 +909,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
drm_info(&dev_priv->drm, " RC_BUF_THRESH%d = 0x%08x\n", i,
rc_buf_thresh_dword[i / 4]);
}
if (!is_pipe_dsc(crtc_state)) {
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
intel_de_write(dev_priv, DSCA_RC_BUF_THRESH_0,
rc_buf_thresh_dword[0]);
intel_de_write(dev_priv, DSCA_RC_BUF_THRESH_0_UDW,
@@ -968,7 +966,7 @@ static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state)
drm_info(&dev_priv->drm, " RC_RANGE_PARAM_%d = 0x%08x\n", i,
rc_range_params_dword[i / 2]);
}
if (!is_pipe_dsc(crtc_state)) {
if (!is_pipe_dsc(crtc, cpu_transcoder)) {
intel_de_write(dev_priv, DSCA_RC_RANGE_PARAMETERS_0,
rc_range_params_dword[0]);
intel_de_write(dev_priv, DSCA_RC_RANGE_PARAMETERS_0_UDW,
@@ -1095,18 +1093,16 @@ static void intel_dsc_dp_pps_write(struct intel_encoder *encoder,
sizeof(dp_dsc_pps_sdp));
}

static i915_reg_t dss_ctl1_reg(const struct intel_crtc_state *crtc_state)
static i915_reg_t dss_ctl1_reg(struct intel_crtc *crtc, enum transcoder cpu_transcoder)
{
enum pipe pipe = to_intel_crtc(crtc_state->uapi.crtc)->pipe;

return is_pipe_dsc(crtc_state) ? ICL_PIPE_DSS_CTL1(pipe) : DSS_CTL1;
return is_pipe_dsc(crtc, cpu_transcoder) ?
ICL_PIPE_DSS_CTL1(crtc->pipe) : DSS_CTL1;
}

static i915_reg_t dss_ctl2_reg(const struct intel_crtc_state *crtc_state)
static i915_reg_t dss_ctl2_reg(struct intel_crtc *crtc, enum transcoder cpu_transcoder)
{
enum pipe pipe = to_intel_crtc(crtc_state->uapi.crtc)->pipe;

return is_pipe_dsc(crtc_state) ? ICL_PIPE_DSS_CTL2(pipe) : DSS_CTL2;
return is_pipe_dsc(crtc, cpu_transcoder) ?
ICL_PIPE_DSS_CTL2(crtc->pipe) : DSS_CTL2;
}

static struct intel_crtc *
@@ -1142,7 +1138,7 @@ void intel_uncompressed_joiner_enable(const struct intel_crtc_state *crtc_state)
else
dss_ctl1_val |= UNCOMPRESSED_JOINER_MASTER;

intel_de_write(dev_priv, dss_ctl1_reg(crtc_state), dss_ctl1_val);
intel_de_write(dev_priv, dss_ctl1_reg(crtc, crtc_state->cpu_transcoder), dss_ctl1_val);
}
}

@@ -1176,8 +1172,8 @@ void intel_dsc_enable(struct intel_encoder *encoder,
if (!crtc_state->bigjoiner_slave)
dss_ctl1_val |= MASTER_BIG_JOINER_ENABLE;
}
intel_de_write(dev_priv, dss_ctl1_reg(crtc_state), dss_ctl1_val);
intel_de_write(dev_priv, dss_ctl2_reg(crtc_state), dss_ctl2_val);
intel_de_write(dev_priv, dss_ctl1_reg(crtc, crtc_state->cpu_transcoder), dss_ctl1_val);
intel_de_write(dev_priv, dss_ctl2_reg(crtc, crtc_state->cpu_transcoder), dss_ctl2_val);
}

void intel_dsc_disable(const struct intel_crtc_state *old_crtc_state)
@@ -1188,8 +1184,8 @@ void intel_dsc_disable(const struct intel_crtc_state *old_crtc_state)
/* Disable only if either of them is enabled */
if (old_crtc_state->dsc.compression_enable ||
old_crtc_state->bigjoiner) {
intel_de_write(dev_priv, dss_ctl1_reg(old_crtc_state), 0);
intel_de_write(dev_priv, dss_ctl2_reg(old_crtc_state), 0);
intel_de_write(dev_priv, dss_ctl1_reg(crtc, old_crtc_state->cpu_transcoder), 0);
intel_de_write(dev_priv, dss_ctl2_reg(crtc, old_crtc_state->cpu_transcoder), 0);
}
}

@@ -1199,7 +1195,7 @@ void intel_uncompressed_joiner_get_config(struct intel_crtc_state *crtc_state)
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
u32 dss_ctl1;

dss_ctl1 = intel_de_read(dev_priv, dss_ctl1_reg(crtc_state));
dss_ctl1 = intel_de_read(dev_priv, dss_ctl1_reg(crtc, crtc_state->cpu_transcoder));
if (dss_ctl1 & UNCOMPRESSED_JOINER_MASTER) {
crtc_state->bigjoiner = true;
crtc_state->bigjoiner_linked_crtc = intel_dsc_get_bigjoiner_secondary(crtc);
@@ -1214,9 +1210,10 @@ void intel_uncompressed_joiner_get_config(struct intel_crtc_state *crtc_state)

void intel_dsc_get_config(struct intel_crtc_state *crtc_state)
{
struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
enum pipe pipe = crtc->pipe;
enum intel_display_power_domain power_domain;
intel_wakeref_t wakeref;
@@ -1225,14 +1222,14 @@ void intel_dsc_get_config(struct intel_crtc_state *crtc_state)
if (!intel_dsc_source_support(crtc_state))
return;

power_domain = intel_dsc_power_domain(crtc_state);
power_domain = intel_dsc_power_domain(crtc, cpu_transcoder);

wakeref = intel_display_power_get_if_enabled(dev_priv, power_domain);
if (!wakeref)
return;

dss_ctl1 = intel_de_read(dev_priv, dss_ctl1_reg(crtc_state));
dss_ctl2 = intel_de_read(dev_priv, dss_ctl2_reg(crtc_state));
dss_ctl1 = intel_de_read(dev_priv, dss_ctl1_reg(crtc, cpu_transcoder));
dss_ctl2 = intel_de_read(dev_priv, dss_ctl2_reg(crtc, cpu_transcoder));

crtc_state->dsc.compression_enable = dss_ctl2 & LEFT_BRANCH_VDSC_ENABLE;
if (!crtc_state->dsc.compression_enable)
@@ -1256,7 +1253,7 @@ void intel_dsc_get_config(struct intel_crtc_state *crtc_state)
/* FIXME: add more state readout as needed */

/* PPS1 */
if (!is_pipe_dsc(crtc_state))
if (!is_pipe_dsc(crtc, cpu_transcoder))
val = intel_de_read(dev_priv, DSCA_PICTURE_PARAMETER_SET_1);
else
val = intel_de_read(dev_priv,

@@ -8,8 +8,10 @@

#include <linux/types.h>

struct intel_encoder;
enum transcoder;
struct intel_crtc;
struct intel_crtc_state;
struct intel_encoder;

bool intel_dsc_source_support(const struct intel_crtc_state *crtc_state);
void intel_uncompressed_joiner_enable(const struct intel_crtc_state *crtc_state);
@@ -21,7 +23,7 @@ int intel_dsc_compute_params(struct intel_encoder *encoder,
void intel_uncompressed_joiner_get_config(struct intel_crtc_state *crtc_state);
void intel_dsc_get_config(struct intel_crtc_state *crtc_state);
enum intel_display_power_domain
intel_dsc_power_domain(const struct intel_crtc_state *crtc_state);
intel_dsc_power_domain(struct intel_crtc *crtc, enum transcoder cpu_transcoder);
struct intel_crtc *intel_dsc_get_bigjoiner_secondary(const struct intel_crtc *primary_crtc);

#endif /* __INTEL_VDSC_H__ */
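For reference, a minimal sketch of the calling pattern the VDSC hunks above switch to: callers derive the crtc and cpu_transcoder from the crtc state once and hand them to the DSS_CTL register helpers, instead of passing the whole crtc_state down. Only the wrapper name below is made up; everything else mirrors the hunks above.

static void sketch_write_dss_ctl1(struct drm_i915_private *dev_priv,
				  const struct intel_crtc_state *crtc_state,
				  u32 dss_ctl1_val)
{
	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);

	/* dss_ctl1_reg() picks DSS_CTL1 vs. ICL_PIPE_DSS_CTL1(pipe) via is_pipe_dsc() */
	intel_de_write(dev_priv,
		       dss_ctl1_reg(crtc, crtc_state->cpu_transcoder),
		       dss_ctl1_val);
}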
@@ -656,6 +656,7 @@ skl_disable_plane(struct intel_plane *plane,

skl_write_plane_wm(plane, crtc_state);

intel_psr2_disable_plane_sel_fetch(plane, crtc_state);
intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), 0);
intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id), 0);

@@ -993,6 +994,11 @@ static u32 skl_surf_address(const struct intel_plane_state *plane_state,
u32 offset = plane_state->view.color_plane[color_plane].offset;

if (intel_fb_uses_dpt(fb)) {
/*
* The DPT object contains only one vma, so the VMA's offset
* within the DPT is always 0.
*/
WARN_ON(plane_state->dpt_vma->node.start);
WARN_ON(offset & 0x1fffff);
return offset >> 9;
} else {
@@ -1096,8 +1102,7 @@ skl_program_plane(struct intel_plane *plane,
(plane_state->view.color_plane[1].y << 16) |
plane_state->view.color_plane[1].x);

if (!drm_atomic_crtc_needs_modeset(&crtc_state->uapi))
intel_psr2_program_plane_sel_fetch(plane, crtc_state, plane_state, color_plane);
intel_psr2_program_plane_sel_fetch(plane, crtc_state, plane_state, color_plane);

/*
* Enable the scaler before the plane so that we don't
@@ -32,6 +32,7 @@

#include "i915_drv.h"
#include "intel_atomic.h"
#include "intel_backlight.h"
#include "intel_connector.h"
#include "intel_crtc.h"
#include "intel_de.h"
@@ -270,23 +271,19 @@ static int intel_dsi_compute_config(struct intel_encoder *encoder,
struct intel_dsi *intel_dsi = container_of(encoder, struct intel_dsi,
base);
struct intel_connector *intel_connector = intel_dsi->attached_connector;
const struct drm_display_mode *fixed_mode = intel_connector->panel.fixed_mode;
struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
int ret;

drm_dbg_kms(&dev_priv->drm, "\n");
pipe_config->output_format = INTEL_OUTPUT_FORMAT_RGB;

if (fixed_mode) {
intel_fixed_panel_mode(fixed_mode, adjusted_mode);
ret = intel_panel_compute_config(intel_connector, adjusted_mode);
if (ret)
return ret;

if (HAS_GMCH(dev_priv))
ret = intel_gmch_panel_fitting(pipe_config, conn_state);
else
ret = intel_pch_panel_fitting(pipe_config, conn_state);
if (ret)
return ret;
}
ret = intel_panel_fitting(pipe_config, conn_state);
if (ret)
return ret;

if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return -EINVAL;
@@ -883,7 +880,7 @@ static void intel_dsi_pre_enable(struct intel_atomic_state *state,
intel_dsi_port_enable(encoder, pipe_config);
}

intel_panel_enable_backlight(pipe_config, conn_state);
intel_backlight_enable(pipe_config, conn_state);
intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_BACKLIGHT_ON);
}

@@ -913,7 +910,7 @@ static void intel_dsi_disable(struct intel_atomic_state *state,
drm_dbg_kms(&i915->drm, "\n");

intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_BACKLIGHT_OFF);
intel_panel_disable_backlight(old_conn_state);
intel_backlight_disable(old_conn_state);

/*
* According to the spec we should send SHUTDOWN before
@@ -1633,25 +1630,21 @@ static const struct drm_connector_funcs intel_dsi_connector_funcs = {
static void vlv_dsi_add_properties(struct intel_connector *connector)
{
struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
u32 allowed_scalers;

if (connector->panel.fixed_mode) {
u32 allowed_scalers;
allowed_scalers = BIT(DRM_MODE_SCALE_ASPECT) | BIT(DRM_MODE_SCALE_FULLSCREEN);
if (!HAS_GMCH(dev_priv))
allowed_scalers |= BIT(DRM_MODE_SCALE_CENTER);

allowed_scalers = BIT(DRM_MODE_SCALE_ASPECT) | BIT(DRM_MODE_SCALE_FULLSCREEN);
if (!HAS_GMCH(dev_priv))
allowed_scalers |= BIT(DRM_MODE_SCALE_CENTER);
drm_connector_attach_scaling_mode_property(&connector->base,
allowed_scalers);

drm_connector_attach_scaling_mode_property(&connector->base,
allowed_scalers);
connector->base.state->scaling_mode = DRM_MODE_SCALE_ASPECT;

connector->base.state->scaling_mode = DRM_MODE_SCALE_ASPECT;

drm_connector_set_panel_orientation_with_quirk(
&connector->base,
intel_dsi_get_panel_orientation(connector),
connector->panel.fixed_mode->hdisplay,
connector->panel.fixed_mode->vdisplay);
}
drm_connector_set_panel_orientation_with_quirk(&connector->base,
intel_dsi_get_panel_orientation(connector),
connector->panel.fixed_mode->hdisplay,
connector->panel.fixed_mode->vdisplay);
}

#define NS_KHZ_RATIO 1000000
@@ -1876,7 +1869,7 @@ void vlv_dsi_init(struct drm_i915_private *dev_priv)
intel_encoder->post_disable = intel_dsi_post_disable;
intel_encoder->get_hw_state = intel_dsi_get_hw_state;
intel_encoder->get_config = intel_dsi_get_config;
intel_encoder->update_pipe = intel_panel_update_backlight;
intel_encoder->update_pipe = intel_backlight_update;
intel_encoder->shutdown = intel_dsi_shutdown;

intel_connector->get_hw_state = intel_connector_get_hw_state;
@@ -1964,7 +1957,7 @@ void vlv_dsi_init(struct drm_i915_private *dev_priv)
}

intel_panel_init(&intel_connector->panel, fixed_mode, NULL);
intel_panel_setup_backlight(connector, INVALID_PIPE);
intel_backlight_setup(intel_connector, INVALID_PIPE);

vlv_dsi_add_properties(intel_connector);
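As a quick usage note on the backlight renames above (intel_panel_*backlight* becomes intel_backlight_*, declared in intel_backlight.h), this is the shape of the wiring in vlv_dsi_init(); only the helper name below is invented.

static void sketch_wire_dsi_backlight(struct intel_encoder *encoder,
				      struct intel_connector *connector)
{
	/* renamed update hook, as assigned in the vlv_dsi_init() hunk above */
	encoder->update_pipe = intel_backlight_update;

	/* renamed probe entry point, called once the panel is initialized */
	intel_backlight_setup(connector, INVALID_PIPE);
}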
@@ -568,3 +568,26 @@ void bxt_dsi_reset_clocks(struct intel_encoder *encoder, enum port port)
}
intel_de_write(dev_priv, MIPI_EOT_DISABLE(port), CLOCKSTOP);
}

static void assert_dsi_pll(struct drm_i915_private *i915, bool state)
{
bool cur_state;

vlv_cck_get(i915);
cur_state = vlv_cck_read(i915, CCK_REG_DSI_PLL_CONTROL) & DSI_PLL_VCO_EN;
vlv_cck_put(i915);

I915_STATE_WARN(cur_state != state,
"DSI PLL state assertion failure (expected %s, current %s)\n",
onoff(state), onoff(cur_state));
}

void assert_dsi_pll_enabled(struct drm_i915_private *i915)
{
assert_dsi_pll(i915, true);
}

void assert_dsi_pll_disabled(struct drm_i915_private *i915)
{
assert_dsi_pll(i915, false);
}
@@ -1373,13 +1373,28 @@ err_st_alloc:
}

static struct scatterlist *
remap_pages(struct drm_i915_gem_object *obj, unsigned int offset,
remap_pages(struct drm_i915_gem_object *obj,
unsigned int offset, unsigned int alignment_pad,
unsigned int width, unsigned int height,
unsigned int src_stride, unsigned int dst_stride,
struct sg_table *st, struct scatterlist *sg)
{
unsigned int row;

if (alignment_pad) {
st->nents++;

/*
* The DE ignores the PTEs for the padding tiles, the sg entry
* here is just a convenience to indicate how many padding PTEs
* to insert at this spot.
*/
sg_set_page(sg, NULL, alignment_pad * 4096, 0);
sg_dma_address(sg) = 0;
sg_dma_len(sg) = alignment_pad * 4096;
sg = sg_next(sg);
}

for (row = 0; row < height; row++) {
unsigned int left = width * I915_GTT_PAGE_SIZE;

@@ -1439,6 +1454,7 @@ intel_remap_pages(struct intel_remapped_info *rem_info,
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct sg_table *st;
struct scatterlist *sg;
unsigned int gtt_offset = 0;
int ret = -ENOMEM;
int i;

@@ -1455,10 +1471,19 @@ intel_remap_pages(struct intel_remapped_info *rem_info,
sg = st->sgl;

for (i = 0 ; i < ARRAY_SIZE(rem_info->plane); i++) {
sg = remap_pages(obj, rem_info->plane[i].offset,
unsigned int alignment_pad = 0;

if (rem_info->plane_alignment)
alignment_pad = ALIGN(gtt_offset, rem_info->plane_alignment) - gtt_offset;

sg = remap_pages(obj,
rem_info->plane[i].offset, alignment_pad,
rem_info->plane[i].width, rem_info->plane[i].height,
rem_info->plane[i].src_stride, rem_info->plane[i].dst_stride,
st, sg);

gtt_offset += alignment_pad +
rem_info->plane[i].dst_stride * rem_info->plane[i].height;
}

i915_sg_trim(st);
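A small sketch of the padding arithmetic used by intel_remap_pages() above, assuming only what the hunk shows: plane_alignment is in GTT pages, and the gap between the running offset and the next aligned boundary becomes a dummy scatterlist entry whose PTEs the display engine ignores. The helper name is illustrative.

static unsigned int sketch_alignment_pad(unsigned int gtt_offset,
					 unsigned int plane_alignment)
{
	/* no per-plane alignment requested -> no padding pages */
	if (!plane_alignment)
		return 0;

	/* pages needed to round gtt_offset up to the alignment boundary */
	return ALIGN(gtt_offset, plane_alignment) - gtt_offset;
}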
@@ -97,7 +97,7 @@ static int i915_get_bridge_dev(struct drm_i915_private *dev_priv)
pci_get_domain_bus_and_slot(domain, 0, PCI_DEVFN(0, 0));
if (!dev_priv->bridge_dev) {
drm_err(&dev_priv->drm, "bridge device not found\n");
return -1;
return -EIO;
}
return 0;
}
@@ -409,8 +409,9 @@ static int i915_driver_mmio_probe(struct drm_i915_private *dev_priv)
if (i915_inject_probe_failure(dev_priv))
return -ENODEV;

if (i915_get_bridge_dev(dev_priv))
return -EIO;
ret = i915_get_bridge_dev(dev_priv);
if (ret < 0)
return ret;

ret = intel_uncore_init_mmio(&dev_priv->uncore);
if (ret < 0)
@@ -323,15 +323,15 @@ struct intel_crtc;
struct intel_limit;
struct dpll;

struct drm_i915_display_funcs {
void (*get_cdclk)(struct drm_i915_private *dev_priv,
struct intel_cdclk_config *cdclk_config);
void (*set_cdclk)(struct drm_i915_private *dev_priv,
const struct intel_cdclk_config *cdclk_config,
enum pipe pipe);
int (*bw_calc_min_cdclk)(struct intel_atomic_state *state);
int (*get_fifo_size)(struct drm_i915_private *dev_priv,
enum i9xx_plane_id i9xx_plane);
/* functions used internal in intel_pm.c */
struct drm_i915_clock_gating_funcs {
void (*init_clock_gating)(struct drm_i915_private *dev_priv);
};

/* functions used for watermark calcs for display. */
struct drm_i915_wm_disp_funcs {
/* update_wm is for legacy wm management */
void (*update_wm)(struct drm_i915_private *dev_priv);
int (*compute_pipe_wm)(struct intel_atomic_state *state,
struct intel_crtc *crtc);
int (*compute_intermediate_wm)(struct intel_atomic_state *state,
@@ -343,39 +343,9 @@ struct drm_i915_display_funcs {
void (*optimize_watermarks)(struct intel_atomic_state *state,
struct intel_crtc *crtc);
int (*compute_global_watermarks)(struct intel_atomic_state *state);
void (*update_wm)(struct intel_crtc *crtc);
int (*modeset_calc_cdclk)(struct intel_cdclk_state *state);
u8 (*calc_voltage_level)(int cdclk);
/* Returns the active state of the crtc, and if the crtc is active,
* fills out the pipe-config with the hw state. */
bool (*get_pipe_config)(struct intel_crtc *,
struct intel_crtc_state *);
void (*get_initial_plane_config)(struct intel_crtc *,
struct intel_initial_plane_config *);
int (*crtc_compute_clock)(struct intel_crtc *crtc,
struct intel_crtc_state *crtc_state);
void (*crtc_enable)(struct intel_atomic_state *state,
struct intel_crtc *crtc);
void (*crtc_disable)(struct intel_atomic_state *state,
struct intel_crtc *crtc);
void (*commit_modeset_enables)(struct intel_atomic_state *state);
void (*commit_modeset_disables)(struct intel_atomic_state *state);
void (*audio_codec_enable)(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state);
void (*audio_codec_disable)(struct intel_encoder *encoder,
const struct intel_crtc_state *old_crtc_state,
const struct drm_connector_state *old_conn_state);
void (*fdi_link_train)(struct intel_crtc *crtc,
const struct intel_crtc_state *crtc_state);
void (*init_clock_gating)(struct drm_i915_private *dev_priv);
void (*hpd_irq_setup)(struct drm_i915_private *dev_priv);
/* clock updates for mode set */
/* cursor updates */
/* render clock increase/decrease */
/* display clock increase/decrease */
/* pll clock increase/decrease */
};

struct intel_color_funcs {
int (*color_check)(struct intel_crtc_state *crtc_state);
/*
* Program double buffered color management registers during
@@ -394,6 +364,53 @@ struct drm_i915_display_funcs {
void (*read_luts)(struct intel_crtc_state *crtc_state);
};

struct intel_audio_funcs {
void (*audio_codec_enable)(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state);
void (*audio_codec_disable)(struct intel_encoder *encoder,
const struct intel_crtc_state *old_crtc_state,
const struct drm_connector_state *old_conn_state);
};

struct intel_cdclk_funcs {
void (*get_cdclk)(struct drm_i915_private *dev_priv,
struct intel_cdclk_config *cdclk_config);
void (*set_cdclk)(struct drm_i915_private *dev_priv,
const struct intel_cdclk_config *cdclk_config,
enum pipe pipe);
int (*bw_calc_min_cdclk)(struct intel_atomic_state *state);
int (*modeset_calc_cdclk)(struct intel_cdclk_state *state);
u8 (*calc_voltage_level)(int cdclk);
};

struct intel_hotplug_funcs {
void (*hpd_irq_setup)(struct drm_i915_private *dev_priv);
};

struct intel_fdi_funcs {
void (*fdi_link_train)(struct intel_crtc *crtc,
const struct intel_crtc_state *crtc_state);
};

struct intel_dpll_funcs {
int (*crtc_compute_clock)(struct intel_crtc_state *crtc_state);
};

struct drm_i915_display_funcs {
/* Returns the active state of the crtc, and if the crtc is active,
* fills out the pipe-config with the hw state. */
bool (*get_pipe_config)(struct intel_crtc *,
struct intel_crtc_state *);
void (*get_initial_plane_config)(struct intel_crtc *,
struct intel_initial_plane_config *);
void (*crtc_enable)(struct intel_atomic_state *state,
struct intel_crtc *crtc);
void (*crtc_disable)(struct intel_atomic_state *state,
struct intel_crtc *crtc);
void (*commit_modeset_enables)(struct intel_atomic_state *state);
};


#define I915_COLOR_UNEVICTABLE (-1) /* a non-vma sharing the address space */

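A minimal sketch of how the split, constified tables above are consumed: drm_i915_private carries const pointers (clock_gating_funcs, wm_disp, hotplug_funcs, fdi_funcs, dpll_funcs, cdclk_funcs, ...) and call sites indirect through them rather than writing into one big drm_i915_display_funcs. Only the wrapper name below is made up; the member and hook names come from the hunks above.

static void sketch_init_clock_gating(struct drm_i915_private *i915)
{
	/* was: i915->display.init_clock_gating(i915) on a writable vtable */
	i915->clock_gating_funcs->init_clock_gating(i915);
}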
@@ -454,7 +471,6 @@ struct intel_fbc {
} fb;

unsigned int fence_y_offset;
u16 gen9_wa_cfb_stride;
u16 interval;
s8 fence_id;
bool psr2_active;
@@ -479,9 +495,10 @@ struct intel_fbc {
u64 modifier;
} fb;

int cfb_size;
unsigned int cfb_stride;
unsigned int cfb_size;
unsigned int fence_y_offset;
u16 gen9_wa_cfb_stride;
u16 override_cfb_stride;
u16 interval;
s8 fence_id;
bool plane_visible;
@@ -636,22 +653,6 @@ i915_fence_timeout(const struct drm_i915_private *i915)
/* Amount of PSF GV points, BSpec precisely defines this */
#define I915_NUM_PSF_GV_POINTS 3

struct ddi_vbt_port_info {
/* Non-NULL if port present. */
struct intel_bios_encoder_data *devdata;

int max_tmds_clock;

/* This is an index in the HDMI/DVI DDI buffer translation table. */
u8 hdmi_level_shift;
u8 hdmi_level_shift_set:1;

u8 alternate_aux_channel;
u8 alternate_ddc_pin;

int dp_max_link_rate; /* 0 for not limited by VBT */
};

enum psr_lines_to_wait {
PSR_0_LINES_TO_WAIT = 0,
PSR_1_LINE_TO_WAIT,
@@ -706,6 +707,7 @@ struct intel_vbt_data {

struct {
u16 pwm_freq_hz;
u16 brightness_precision_bits;
bool present;
bool active_low_pwm;
u8 min_brightness; /* min_brightness/255 of max */
@@ -732,7 +734,7 @@ struct intel_vbt_data {

struct list_head display_devices;

struct ddi_vbt_port_info ddi_port_info[I915_MAX_PORTS];
struct intel_bios_encoder_data *ports[I915_MAX_PORTS]; /* Non-NULL if port present. */
struct sdvo_device_mapping sdvo_mappings[2];
};

@@ -886,8 +888,6 @@ struct drm_i915_private {
*/
u32 gpio_mmio_base;

u32 hsw_psr_mmio_adjust;

/* MMIO base address for MIPI regs */
u32 mipi_mmio_base;

@@ -974,8 +974,32 @@ struct drm_i915_private {
/* unbound hipri wq for page flips/plane updates */
struct workqueue_struct *flip_wq;

/* pm private clock gating functions */
const struct drm_i915_clock_gating_funcs *clock_gating_funcs;

/* pm display functions */
const struct drm_i915_wm_disp_funcs *wm_disp;

/* irq display functions */
const struct intel_hotplug_funcs *hotplug_funcs;

/* fdi display functions */
const struct intel_fdi_funcs *fdi_funcs;

/* display pll funcs */
const struct intel_dpll_funcs *dpll_funcs;

/* Display functions */
struct drm_i915_display_funcs display;
const struct drm_i915_display_funcs *display;

/* Display internal color functions */
const struct intel_color_funcs *color_funcs;

/* Display internal audio functions */
const struct intel_audio_funcs *audio_funcs;

/* Display CDCLK functions */
const struct intel_cdclk_funcs *cdclk_funcs;

/* PCH chipset type */
enum intel_pch pch_type;
@@ -1016,12 +1040,6 @@ struct drm_i915_private {

struct list_head global_obj_list;

/*
* For reading active_pipes holding any crtc lock is
* sufficient, for writing must hold all of them.
*/
u8 active_pipes;

struct i915_wa_list gt_wa_list;

struct i915_frontbuffer_tracking fb_tracking;
@@ -1665,6 +1683,7 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
#define HAS_IPS(dev_priv) (IS_HSW_ULT(dev_priv) || IS_BROADWELL(dev_priv))

#define HAS_DP_MST(dev_priv) (INTEL_INFO(dev_priv)->display.has_dp_mst)
#define HAS_DP20(dev_priv) (IS_DG2(dev_priv))

#define HAS_CDCLK_CRAWL(dev_priv) (INTEL_INFO(dev_priv)->display.has_cdclk_crawl)
#define HAS_DDI(dev_priv) (INTEL_INFO(dev_priv)->display.has_ddi)
@@ -1721,6 +1740,8 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,

#define HAS_VRR(i915) (GRAPHICS_VER(i915) >= 12)

#define HAS_ASYNC_FLIPS(i915) (DISPLAY_VER(i915) >= 5)

/* Only valid when HAS_DISPLAY() is true */
#define INTEL_DISPLAY_ENABLED(dev_priv) \
(drm_WARN_ON(&(dev_priv)->drm, !HAS_DISPLAY(dev_priv)), !(dev_priv)->params.disable_display)
@@ -359,9 +359,8 @@ void i915_hotplug_interrupt_update(struct drm_i915_private *dev_priv,
* @interrupt_mask: mask of interrupt bits to update
* @enabled_irq_mask: mask of interrupt bits to enable
*/
void ilk_update_display_irq(struct drm_i915_private *dev_priv,
u32 interrupt_mask,
u32 enabled_irq_mask)
static void ilk_update_display_irq(struct drm_i915_private *dev_priv,
u32 interrupt_mask, u32 enabled_irq_mask)
{
u32 new_val;

@@ -380,6 +379,16 @@ void ilk_update_display_irq(struct drm_i915_private *dev_priv,
}
}

void ilk_enable_display_irq(struct drm_i915_private *i915, u32 bits)
{
ilk_update_display_irq(i915, bits, bits);
}

void ilk_disable_display_irq(struct drm_i915_private *i915, u32 bits)
{
ilk_update_display_irq(i915, bits, 0);
}

/**
* bdw_update_port_irq - update DE port interrupt
* @dev_priv: driver private
@@ -419,10 +428,9 @@ static void bdw_update_port_irq(struct drm_i915_private *dev_priv,
* @interrupt_mask: mask of interrupt bits to update
* @enabled_irq_mask: mask of interrupt bits to enable
*/
void bdw_update_pipe_irq(struct drm_i915_private *dev_priv,
enum pipe pipe,
u32 interrupt_mask,
u32 enabled_irq_mask)
static void bdw_update_pipe_irq(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 interrupt_mask,
u32 enabled_irq_mask)
{
u32 new_val;

@@ -444,15 +452,27 @@ void bdw_update_pipe_irq(struct drm_i915_private *dev_priv,
}
}

void bdw_enable_pipe_irq(struct drm_i915_private *i915,
enum pipe pipe, u32 bits)
{
bdw_update_pipe_irq(i915, pipe, bits, bits);
}

void bdw_disable_pipe_irq(struct drm_i915_private *i915,
enum pipe pipe, u32 bits)
{
bdw_update_pipe_irq(i915, pipe, bits, 0);
}

/**
* ibx_display_interrupt_update - update SDEIMR
* @dev_priv: driver private
* @interrupt_mask: mask of interrupt bits to update
* @enabled_irq_mask: mask of interrupt bits to enable
*/
void ibx_display_interrupt_update(struct drm_i915_private *dev_priv,
u32 interrupt_mask,
u32 enabled_irq_mask)
static void ibx_display_interrupt_update(struct drm_i915_private *dev_priv,
u32 interrupt_mask,
u32 enabled_irq_mask)
{
u32 sdeimr = intel_uncore_read(&dev_priv->uncore, SDEIMR);
sdeimr &= ~interrupt_mask;
@@ -469,6 +489,16 @@ void ibx_display_interrupt_update(struct drm_i915_private *dev_priv,
intel_uncore_posting_read(&dev_priv->uncore, SDEIMR);
}

void ibx_enable_display_interrupt(struct drm_i915_private *i915, u32 bits)
{
ibx_display_interrupt_update(i915, bits, bits);
}

void ibx_disable_display_interrupt(struct drm_i915_private *i915, u32 bits)
{
ibx_display_interrupt_update(i915, bits, 0);
}

u32 i915_pipestat_enable_mask(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
@@ -2093,22 +2123,6 @@ static void ivb_display_irq_handler(struct drm_i915_private *dev_priv,
if (de_iir & DE_ERR_INT_IVB)
ivb_err_int_handler(dev_priv);

if (de_iir & DE_EDP_PSR_INT_HSW) {
struct intel_encoder *encoder;

for_each_intel_encoder_with_psr(&dev_priv->drm, encoder) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);

u32 psr_iir = intel_uncore_read(&dev_priv->uncore,
EDP_PSR_IIR);

intel_psr_irq_handler(intel_dp, psr_iir);
intel_uncore_write(&dev_priv->uncore,
EDP_PSR_IIR, psr_iir);
break;
}
}

if (de_iir & DE_AUX_CHANNEL_A_IVB)
dp_aux_irq_handler(dev_priv);

@@ -4331,6 +4345,20 @@ static irqreturn_t i965_irq_handler(int irq, void *arg)
return ret;
}

#define HPD_FUNCS(platform) \
static const struct intel_hotplug_funcs platform##_hpd_funcs = { \
.hpd_irq_setup = platform##_hpd_irq_setup, \
}

HPD_FUNCS(i915);
HPD_FUNCS(dg1);
HPD_FUNCS(gen11);
HPD_FUNCS(bxt);
HPD_FUNCS(icp);
HPD_FUNCS(spt);
HPD_FUNCS(ilk);
#undef HPD_FUNCS

/**
* intel_irq_init - initializes irq support
* @dev_priv: i915 device instance
@@ -4381,20 +4409,20 @@ void intel_irq_init(struct drm_i915_private *dev_priv)

if (HAS_GMCH(dev_priv)) {
if (I915_HAS_HOTPLUG(dev_priv))
dev_priv->display.hpd_irq_setup = i915_hpd_irq_setup;
dev_priv->hotplug_funcs = &i915_hpd_funcs;
} else {
if (HAS_PCH_DG1(dev_priv))
dev_priv->display.hpd_irq_setup = dg1_hpd_irq_setup;
dev_priv->hotplug_funcs = &dg1_hpd_funcs;
else if (DISPLAY_VER(dev_priv) >= 11)
dev_priv->display.hpd_irq_setup = gen11_hpd_irq_setup;
dev_priv->hotplug_funcs = &gen11_hpd_funcs;
else if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv))
dev_priv->display.hpd_irq_setup = bxt_hpd_irq_setup;
dev_priv->hotplug_funcs = &bxt_hpd_funcs;
else if (INTEL_PCH_TYPE(dev_priv) >= PCH_ICP)
dev_priv->display.hpd_irq_setup = icp_hpd_irq_setup;
dev_priv->hotplug_funcs = &icp_hpd_funcs;
else if (INTEL_PCH_TYPE(dev_priv) >= PCH_SPT)
dev_priv->display.hpd_irq_setup = spt_hpd_irq_setup;
dev_priv->hotplug_funcs = &spt_hpd_funcs;
else
dev_priv->display.hpd_irq_setup = ilk_hpd_irq_setup;
dev_priv->hotplug_funcs = &ilk_hpd_funcs;
}
}
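For readability, this is what one HPD_FUNCS(platform) instantiation above expands to; intel_pm.c reuses the same trick for its per-platform clock-gating tables further down. The expansion is spelled out purely as an illustration.

/* HPD_FUNCS(ilk) expands to: */
static const struct intel_hotplug_funcs ilk_hpd_funcs = {
	.hpd_irq_setup = ilk_hpd_irq_setup,
};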
@@ -9,9 +9,9 @@
#include <linux/ktime.h>
#include <linux/types.h>

#include "display/intel_display.h"
#include "i915_reg.h"

enum pipe;
struct drm_crtc;
struct drm_device;
struct drm_display_mode;
@@ -40,46 +40,15 @@ void valleyview_disable_display_irqs(struct drm_i915_private *dev_priv);
void i915_hotplug_interrupt_update(struct drm_i915_private *dev_priv,
u32 mask,
u32 bits);
void ilk_update_display_irq(struct drm_i915_private *dev_priv,
u32 interrupt_mask,
u32 enabled_irq_mask);
static inline void
ilk_enable_display_irq(struct drm_i915_private *dev_priv, u32 bits)
{
ilk_update_display_irq(dev_priv, bits, bits);
}
static inline void
ilk_disable_display_irq(struct drm_i915_private *dev_priv, u32 bits)
{
ilk_update_display_irq(dev_priv, bits, 0);
}
void bdw_update_pipe_irq(struct drm_i915_private *dev_priv,
enum pipe pipe,
u32 interrupt_mask,
u32 enabled_irq_mask);
static inline void bdw_enable_pipe_irq(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 bits)
{
bdw_update_pipe_irq(dev_priv, pipe, bits, bits);
}
static inline void bdw_disable_pipe_irq(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 bits)
{
bdw_update_pipe_irq(dev_priv, pipe, bits, 0);
}
void ibx_display_interrupt_update(struct drm_i915_private *dev_priv,
u32 interrupt_mask,
u32 enabled_irq_mask);
static inline void
ibx_enable_display_interrupt(struct drm_i915_private *dev_priv, u32 bits)
{
ibx_display_interrupt_update(dev_priv, bits, bits);
}
static inline void
ibx_disable_display_interrupt(struct drm_i915_private *dev_priv, u32 bits)
{
ibx_display_interrupt_update(dev_priv, bits, 0);
}

void ilk_enable_display_irq(struct drm_i915_private *i915, u32 bits);
void ilk_disable_display_irq(struct drm_i915_private *i915, u32 bits);

void bdw_enable_pipe_irq(struct drm_i915_private *i915, enum pipe pipe, u32 bits);
void bdw_disable_pipe_irq(struct drm_i915_private *i915, enum pipe pipe, u32 bits);

void ibx_enable_display_interrupt(struct drm_i915_private *i915, u32 bits);
void ibx_disable_display_interrupt(struct drm_i915_private *i915, u32 bits);

void gen5_enable_gt_irq(struct drm_i915_private *dev_priv, u32 mask);
void gen5_disable_gt_irq(struct drm_i915_private *dev_priv, u32 mask);
@@ -55,7 +55,7 @@ struct drm_printer;
param(int, enable_fbc, -1, 0600) \
param(int, enable_psr, -1, 0600) \
param(bool, psr_safest_params, false, 0400) \
param(bool, enable_psr2_sel_fetch, false, 0400) \
param(bool, enable_psr2_sel_fetch, true, 0400) \
param(int, disable_power_well, -1, 0400) \
param(int, enable_ips, 1, 0600) \
param(int, invert_brightness, 0, 0600) \
@@ -537,8 +537,6 @@ static const struct intel_device_info vlv_info = {
BIT(TRANSCODER_C) | BIT(TRANSCODER_EDP), \
.display.has_ddi = 1, \
.display.has_fpga_dbg = 1, \
.display.has_psr = 1, \
.display.has_psr_hw_tracking = 1, \
.display.has_dp_mst = 1, \
.has_rc6p = 0 /* RC6p removed-by HSW */, \
HSW_PIPE_OFFSETS, \
@@ -642,6 +640,8 @@ static const struct intel_device_info chv_info = {
.has_gt_uc = 1, \
.display.has_hdcp = 1, \
.display.has_ipc = 1, \
.display.has_psr = 1, \
.display.has_psr_hw_tracking = 1, \
.dbuf.size = 896 - 4, /* 4 blocks for bypass path allocation */ \
.dbuf.slice_mask = BIT(DBUF_S1)
@@ -2237,10 +2237,14 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)

#define SNPS_PHY_MPLLB_DIV(phy) _MMIO_SNPS(phy, 0x168004)
#define SNPS_PHY_MPLLB_FORCE_EN REG_BIT(31)
#define SNPS_PHY_MPLLB_DIV_CLK_EN REG_BIT(30)
#define SNPS_PHY_MPLLB_DIV5_CLK_EN REG_BIT(29)
#define SNPS_PHY_MPLLB_V2I REG_GENMASK(27, 26)
#define SNPS_PHY_MPLLB_FREQ_VCO REG_GENMASK(25, 24)
#define SNPS_PHY_MPLLB_DIV_MULTIPLIER REG_GENMASK(23, 16)
#define SNPS_PHY_MPLLB_PMIX_EN REG_BIT(10)
#define SNPS_PHY_MPLLB_DP2_MODE REG_BIT(9)
#define SNPS_PHY_MPLLB_WORD_DIV2_EN REG_BIT(8)
#define SNPS_PHY_MPLLB_TX_CLK_DIV REG_GENMASK(7, 5)

#define SNPS_PHY_MPLLB_FRACN1(phy) _MMIO_SNPS(phy, 0x168008)
@@ -3356,6 +3360,10 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
#define ILK_DPFC_DISABLE_DUMMY0 (1 << 8)
#define ILK_DPFC_CHICKEN_COMP_DUMMY_PIXEL (1 << 14)
#define ILK_DPFC_NUKE_ON_ANY_MODIFICATION (1 << 23)
#define GLK_FBC_STRIDE _MMIO(0x43228)
#define FBC_STRIDE_OVERRIDE REG_BIT(15)
#define FBC_STRIDE_MASK REG_GENMASK(14, 0)
#define FBC_STRIDE(x) REG_FIELD_PREP(FBC_STRIDE_MASK, (x))
#define ILK_FBC_RT_BASE _MMIO(0x2128)
#define ILK_FBC_RT_VALID (1 << 0)
#define SNB_FBC_FRONT_BUFFER (1 << 1)
@@ -4231,6 +4239,7 @@ enum {
#define DUPS1_GATING_DIS (1 << 15)
#define DUPS2_GATING_DIS (1 << 19)
#define DUPS3_GATING_DIS (1 << 23)
#define CURSOR_GATING_DIS REG_BIT(28)
#define DPF_GATING_DIS (1 << 10)
#define DPF_RAM_GATING_DIS (1 << 9)
#define DPFR_GATING_DIS (1 << 8)
@@ -4509,11 +4518,9 @@ enum {
* HSW PSR registers are relative to DDIA(_DDI_BUF_CTL_A + 0x800) with just one
* instance of it
*/
#define _HSW_EDP_PSR_BASE 0x64800
#define _SRD_CTL_A 0x60800
#define _SRD_CTL_EDP 0x6f800
#define _PSR_ADJ(tran, reg) (_TRANS2(tran, reg) - dev_priv->hsw_psr_mmio_adjust)
#define EDP_PSR_CTL(tran) _MMIO(_PSR_ADJ(tran, _SRD_CTL_A))
#define EDP_PSR_CTL(tran) _MMIO(_TRANS2(tran, _SRD_CTL_A))
#define EDP_PSR_ENABLE (1 << 31)
#define BDW_PSR_SINGLE_FRAME (1 << 30)
#define EDP_PSR_RESTORE_PSR_ACTIVE_CTX_MASK (1 << 29) /* SW can't modify */
@@ -4557,22 +4564,13 @@ enum {
#define EDP_PSR_POST_EXIT(trans) (0x2 << _EDP_PSR_TRANS_SHIFT(trans))
#define EDP_PSR_PRE_ENTRY(trans) (0x1 << _EDP_PSR_TRANS_SHIFT(trans))

#define _SRD_AUX_CTL_A 0x60810
#define _SRD_AUX_CTL_EDP 0x6f810
#define EDP_PSR_AUX_CTL(tran) _MMIO(_PSR_ADJ(tran, _SRD_AUX_CTL_A))
#define EDP_PSR_AUX_CTL_TIME_OUT_MASK (3 << 26)
#define EDP_PSR_AUX_CTL_MESSAGE_SIZE_MASK (0x1f << 20)
#define EDP_PSR_AUX_CTL_PRECHARGE_2US_MASK (0xf << 16)
#define EDP_PSR_AUX_CTL_ERROR_INTERRUPT (1 << 11)
#define EDP_PSR_AUX_CTL_BIT_CLOCK_2X_MASK (0x7ff)

#define _SRD_AUX_DATA_A 0x60814
#define _SRD_AUX_DATA_EDP 0x6f814
#define EDP_PSR_AUX_DATA(tran, i) _MMIO(_PSR_ADJ(tran, _SRD_AUX_DATA_A) + (i) + 4) /* 5 registers */
#define EDP_PSR_AUX_DATA(tran, i) _MMIO(_TRANS2(tran, _SRD_AUX_DATA_A) + (i) + 4) /* 5 registers */

#define _SRD_STATUS_A 0x60840
#define _SRD_STATUS_EDP 0x6f840
#define EDP_PSR_STATUS(tran) _MMIO(_PSR_ADJ(tran, _SRD_STATUS_A))
#define EDP_PSR_STATUS(tran) _MMIO(_TRANS2(tran, _SRD_STATUS_A))
#define EDP_PSR_STATUS_STATE_MASK (7 << 29)
#define EDP_PSR_STATUS_STATE_SHIFT 29
#define EDP_PSR_STATUS_STATE_IDLE (0 << 29)
@@ -4599,13 +4597,13 @@ enum {

#define _SRD_PERF_CNT_A 0x60844
#define _SRD_PERF_CNT_EDP 0x6f844
#define EDP_PSR_PERF_CNT(tran) _MMIO(_PSR_ADJ(tran, _SRD_PERF_CNT_A))
#define EDP_PSR_PERF_CNT(tran) _MMIO(_TRANS2(tran, _SRD_PERF_CNT_A))
#define EDP_PSR_PERF_CNT_MASK 0xffffff

/* PSR_MASK on SKL+ */
#define _SRD_DEBUG_A 0x60860
#define _SRD_DEBUG_EDP 0x6f860
#define EDP_PSR_DEBUG(tran) _MMIO(_PSR_ADJ(tran, _SRD_DEBUG_A))
#define EDP_PSR_DEBUG(tran) _MMIO(_TRANS2(tran, _SRD_DEBUG_A))
#define EDP_PSR_DEBUG_MASK_MAX_SLEEP (1 << 28)
#define EDP_PSR_DEBUG_MASK_LPSP (1 << 27)
#define EDP_PSR_DEBUG_MASK_MEMUP (1 << 26)
@@ -8176,8 +8174,9 @@ enum {
#define GLK_CL0_PWR_DOWN (1 << 10)

#define CHICKEN_MISC_4 _MMIO(0x4208c)
#define FBC_STRIDE_OVERRIDE (1 << 13)
#define FBC_STRIDE_MASK 0x1FFF
#define CHICKEN_FBC_STRIDE_OVERRIDE REG_BIT(13)
#define CHICKEN_FBC_STRIDE_MASK REG_GENMASK(12, 0)
#define CHICKEN_FBC_STRIDE(x) REG_FIELD_PREP(CHICKEN_FBC_STRIDE_MASK, (x))

#define _CHICKEN_PIPESL_1_A 0x420b0
#define _CHICKEN_PIPESL_1_B 0x420b4
@@ -8211,6 +8210,7 @@ enum {
#define VSC_DATA_SEL_SOFTWARE_CONTROL REG_BIT(25) /* GLK */
#define FECSTALL_DIS_DPTSTREAM_DPTTG REG_BIT(23)
#define DDI_TRAINING_OVERRIDE_ENABLE REG_BIT(19)
#define ADLP_1_BASED_X_GRANULARITY REG_BIT(18)
#define DDI_TRAINING_OVERRIDE_VALUE REG_BIT(18)
#define DDIE_TRAINING_OVERRIDE_ENABLE REG_BIT(17) /* CHICKEN_TRANS_A only */
#define DDIE_TRAINING_OVERRIDE_VALUE REG_BIT(16) /* CHICKEN_TRANS_A only */
@@ -9096,6 +9096,29 @@ enum {
#define TRANS_DP_HSYNC_ACTIVE_LOW 0
#define TRANS_DP_SYNC_MASK (3 << 3)

#define _TRANS_DP2_CTL_A 0x600a0
#define _TRANS_DP2_CTL_B 0x610a0
#define _TRANS_DP2_CTL_C 0x620a0
#define _TRANS_DP2_CTL_D 0x630a0
#define TRANS_DP2_CTL(trans) _MMIO_TRANS(trans, _TRANS_DP2_CTL_A, _TRANS_DP2_CTL_B)
#define TRANS_DP2_128B132B_CHANNEL_CODING REG_BIT(31)
#define TRANS_DP2_PANEL_REPLAY_ENABLE REG_BIT(30)
#define TRANS_DP2_DEBUG_ENABLE REG_BIT(23)

#define _TRANS_DP2_VFREQHIGH_A 0x600a4
#define _TRANS_DP2_VFREQHIGH_B 0x610a4
#define _TRANS_DP2_VFREQHIGH_C 0x620a4
#define _TRANS_DP2_VFREQHIGH_D 0x630a4
#define TRANS_DP2_VFREQHIGH(trans) _MMIO_TRANS(trans, _TRANS_DP2_VFREQHIGH_A, _TRANS_DP2_VFREQHIGH_B)
#define TRANS_DP2_VFREQ_PIXEL_CLOCK_MASK REG_GENMASK(31, 8)
#define TRANS_DP2_VFREQ_PIXEL_CLOCK(clk_hz) REG_FIELD_PREP(TRANS_DP2_VFREQ_PIXEL_CLOCK_MASK, (clk_hz))

#define _TRANS_DP2_VFREQLOW_A 0x600a8
#define _TRANS_DP2_VFREQLOW_B 0x610a8
#define _TRANS_DP2_VFREQLOW_C 0x620a8
#define _TRANS_DP2_VFREQLOW_D 0x630a8
#define TRANS_DP2_VFREQLOW(trans) _MMIO_TRANS(trans, _TRANS_DP2_VFREQLOW_A, _TRANS_DP2_VFREQLOW_B)

/* SNB eDP training params */
/* SNB A-stepping */
#define EDP_LINK_TRAIN_400MV_0DB_SNB_A (0x38 << 22)
@@ -9710,6 +9733,11 @@ enum {
#define AUDIO_CP_READY(trans) ((1 << 1) << ((trans) * 4))
#define AUDIO_ELD_VALID(trans) ((1 << 0) << ((trans) * 4))

#define _AUD_TCA_DP_2DOT0_CTRL 0x650bc
#define _AUD_TCB_DP_2DOT0_CTRL 0x651bc
#define AUD_DP_2DOT0_CTRL(trans) _MMIO_TRANS(trans, _AUD_TCA_DP_2DOT0_CTRL, _AUD_TCB_DP_2DOT0_CTRL)
#define AUD_ENABLE_SDP_SPLIT REG_BIT(31)

#define HSW_AUD_CHICKENBIT _MMIO(0x65f10)
#define SKL_AUD_CODEC_WAKE_SIGNAL (1 << 15)

@@ -10155,7 +10183,7 @@ enum skl_power_gate {
#define TRANS_DDI_MODE_SELECT_DVI (1 << 24)
#define TRANS_DDI_MODE_SELECT_DP_SST (2 << 24)
#define TRANS_DDI_MODE_SELECT_DP_MST (3 << 24)
#define TRANS_DDI_MODE_SELECT_FDI (4 << 24)
#define TRANS_DDI_MODE_SELECT_FDI_OR_128B132B (4 << 24)
#define TRANS_DDI_BPC_MASK (7 << 20)
#define TRANS_DDI_BPC_8 (0 << 20)
#define TRANS_DDI_BPC_10 (1 << 20)
@@ -11611,6 +11639,14 @@ enum skl_power_gate {
_ICL_DSI_IO_MODECTL_1)
#define COMBO_PHY_MODE_DSI (1 << 0)

/* TGL DSI Chicken register */
#define _TGL_DSI_CHKN_REG_0 0x6B0C0
#define _TGL_DSI_CHKN_REG_1 0x6B8C0
#define TGL_DSI_CHKN_REG(port) _MMIO_PORT(port, \
_TGL_DSI_CHKN_REG_0, \
_TGL_DSI_CHKN_REG_1)
#define TGL_DSI_CHKN_LSHS_GB REG_GENMASK(15, 12)

/* Display Stream Splitter Control */
#define DSS_CTL1 _MMIO(0x67400)
#define SPLITTER_ENABLE (1 << 31)
@@ -12734,4 +12770,7 @@ enum skl_power_gate {
#define CLKREQ_POLICY _MMIO(0x101038)
#define CLKREQ_POLICY_MEM_UP_OVRD REG_BIT(1)

#define CLKGATE_DIS_MISC _MMIO(0x46534)
#define CLKGATE_DIS_MISC_DMASC_GATING_DIS REG_BIT(21)

#endif /* _I915_REG_H_ */
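A usage sketch for the reworked FBC stride chicken bits above: with REG_BIT()/REG_GENMASK()/REG_FIELD_PREP() the override is programmed as a masked field rather than open-coded shifts. The read-modify-write helper intel_de_rmw(i915, reg, clear, set) is assumed here; the register and field names are the ones defined in the hunk above, and the wrapper name is invented.

static void sketch_set_fbc_override_stride(struct drm_i915_private *i915,
					   u16 stride)
{
	/* set the override enable bit and program the stride field */
	intel_de_rmw(i915, CHICKEN_MISC_4,
		     CHICKEN_FBC_STRIDE_OVERRIDE | CHICKEN_FBC_STRIDE_MASK,
		     CHICKEN_FBC_STRIDE_OVERRIDE | CHICKEN_FBC_STRIDE(stride));
}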
@@ -105,8 +105,9 @@ struct intel_remapped_plane_info {
} __packed;

struct intel_remapped_info {
struct intel_remapped_plane_info plane[2];
u32 unused_mbz;
struct intel_remapped_plane_info plane[4];
/* in gtt pages */
u32 plane_alignment;
} __packed;

struct intel_rotation_info {
@@ -129,7 +130,7 @@ static inline void assert_i915_gem_gtt_types(void)
{
BUILD_BUG_ON(sizeof(struct intel_rotation_info) != 2 * sizeof(u32) + 8 * sizeof(u16));
BUILD_BUG_ON(sizeof(struct intel_partial_info) != sizeof(u64) + sizeof(unsigned int));
BUILD_BUG_ON(sizeof(struct intel_remapped_info) != 3 * sizeof(u32) + 8 * sizeof(u16));
BUILD_BUG_ON(sizeof(struct intel_remapped_info) != 5 * sizeof(u32) + 16 * sizeof(u16));

/* Check that rotation/remapped shares offsets for simplicity */
BUILD_BUG_ON(offsetof(struct intel_remapped_info, plane[0]) !=
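Worked out, the updated size assertion above is consistent with the new layout, assuming each packed plane entry is one u32 offset plus four u16 fields (the width/height/src_stride/dst_stride that intel_remap_pages() reads), i.e. 12 bytes per plane:

new: 4 planes * 12 bytes + 4 bytes plane_alignment = 52 = 5 * sizeof(u32) + 16 * sizeof(u16)
old: 2 planes * 12 bytes + 4 bytes unused_mbz      = 28 = 3 * sizeof(u32) +  8 * sizeof(u16)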
@@ -444,7 +444,7 @@ static int icl_pcode_read_mem_global_info(struct drm_i915_private *dev_priv)
break;
default:
MISSING_CASE(val & 0xf);
return -1;
return -EINVAL;
}
} else {
switch (val & 0xf) {
@@ -462,7 +462,7 @@ static int icl_pcode_read_mem_global_info(struct drm_i915_private *dev_priv)
break;
default:
MISSING_CASE(val & 0xf);
return -1;
return -EINVAL;
}
}
@@ -881,9 +881,8 @@ static struct intel_crtc *single_enabled_crtc(struct drm_i915_private *dev_priv)
return enabled;
}

static void pnv_update_wm(struct intel_crtc *unused_crtc)
static void pnv_update_wm(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(unused_crtc->base.dev);
struct intel_crtc *crtc;
const struct cxsr_latency *latency;
u32 reg;
@@ -1152,17 +1151,13 @@ static u16 g4x_compute_wm(const struct intel_crtc_state *crtc_state,
cpp = plane_state->hw.fb->format->cpp[0];

/*
* Not 100% sure which way ELK should go here as the
* spec only says CL/CTG should assume 32bpp and BW
* doesn't need to. But as these things followed the
* mobile vs. desktop lines on gen3 as well, let's
* assume ELK doesn't need this.
* WaUse32BppForSRWM:ctg,elk
*
* The spec also fails to list such a restriction for
* the HPLL watermark, which seems a little strange.
* The spec fails to list this restriction for the
* HPLL watermark, which seems a little strange.
* Let's use 32bpp for the HPLL watermark as well.
*/
if (IS_GM45(dev_priv) && plane->id == PLANE_PRIMARY &&
if (plane->id == PLANE_PRIMARY &&
level != G4X_WM_LEVEL_NORMAL)
cpp = max(cpp, 4u);

@@ -1376,8 +1371,7 @@ static int g4x_compute_pipe_wm(struct intel_atomic_state *state,
struct intel_crtc_state *crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
struct g4x_wm_state *wm_state = &crtc_state->wm.g4x.optimal;
int num_active_planes = hweight8(crtc_state->active_planes &
~BIT(PLANE_CURSOR));
u8 active_planes = crtc_state->active_planes & ~BIT(PLANE_CURSOR);
const struct g4x_pipe_wm *raw;
const struct intel_plane_state *old_plane_state;
const struct intel_plane_state *new_plane_state;
@@ -1417,7 +1411,7 @@ static int g4x_compute_pipe_wm(struct intel_atomic_state *state,
wm_state->sr.cursor = raw->plane[PLANE_CURSOR];
wm_state->sr.fbc = raw->fbc;

wm_state->cxsr = num_active_planes == BIT(PLANE_PRIMARY);
wm_state->cxsr = active_planes == BIT(PLANE_PRIMARY);

level = G4X_WM_LEVEL_HPLL;
if (!g4x_raw_crtc_wm_is_valid(crtc_state, level))
@@ -1708,7 +1702,7 @@ static int vlv_compute_fifo(struct intel_crtc_state *crtc_state)
const struct g4x_pipe_wm *raw =
&crtc_state->wm.vlv.raw[VLV_WM_LEVEL_PM2];
struct vlv_fifo_state *fifo_state = &crtc_state->wm.vlv.fifo_state;
unsigned int active_planes = crtc_state->active_planes & ~BIT(PLANE_CURSOR);
u8 active_planes = crtc_state->active_planes & ~BIT(PLANE_CURSOR);
int num_active_planes = hweight8(active_planes);
const int fifo_size = 511;
int fifo_extra, fifo_left = fifo_size;
@@ -1900,8 +1894,8 @@ static int vlv_compute_pipe_wm(struct intel_atomic_state *state,
struct vlv_wm_state *wm_state = &crtc_state->wm.vlv.optimal;
const struct vlv_fifo_state *fifo_state =
&crtc_state->wm.vlv.fifo_state;
int num_active_planes = hweight8(crtc_state->active_planes &
~BIT(PLANE_CURSOR));
u8 active_planes = crtc_state->active_planes & ~BIT(PLANE_CURSOR);
int num_active_planes = hweight8(active_planes);
bool needs_modeset = drm_atomic_crtc_needs_modeset(&crtc_state->uapi);
const struct intel_plane_state *old_plane_state;
const struct intel_plane_state *new_plane_state;
@@ -2253,9 +2247,8 @@ static void vlv_optimize_watermarks(struct intel_atomic_state *state,
mutex_unlock(&dev_priv->wm.wm_mutex);
}

static void i965_update_wm(struct intel_crtc *unused_crtc)
static void i965_update_wm(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(unused_crtc->base.dev);
struct intel_crtc *crtc;
int srwm = 1;
int cursor_sr = 16;
@@ -2329,9 +2322,8 @@ static void i965_update_wm(struct intel_crtc *unused_crtc)

#undef FW_WM

static void i9xx_update_wm(struct intel_crtc *unused_crtc)
static void i9xx_update_wm(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(unused_crtc->base.dev);
const struct intel_watermark_params *wm_info;
u32 fwater_lo;
u32 fwater_hi;
@@ -2347,7 +2339,10 @@ static void i9xx_update_wm(struct intel_crtc *unused_crtc)
else
wm_info = &i830_a_wm_info;

fifo_size = dev_priv->display.get_fifo_size(dev_priv, PLANE_A);
if (DISPLAY_VER(dev_priv) == 2)
fifo_size = i830_get_fifo_size(dev_priv, PLANE_A);
else
fifo_size = i9xx_get_fifo_size(dev_priv, PLANE_A);
crtc = intel_get_crtc_for_plane(dev_priv, PLANE_A);
if (intel_crtc_active(crtc)) {
const struct drm_display_mode *pipe_mode =
@@ -2374,7 +2369,10 @@ static void i9xx_update_wm(struct intel_crtc *unused_crtc)
if (DISPLAY_VER(dev_priv) == 2)
wm_info = &i830_bc_wm_info;

fifo_size = dev_priv->display.get_fifo_size(dev_priv, PLANE_B);
if (DISPLAY_VER(dev_priv) == 2)
fifo_size = i830_get_fifo_size(dev_priv, PLANE_B);
else
fifo_size = i9xx_get_fifo_size(dev_priv, PLANE_B);
crtc = intel_get_crtc_for_plane(dev_priv, PLANE_B);
if (intel_crtc_active(crtc)) {
const struct drm_display_mode *pipe_mode =
@@ -2475,9 +2473,8 @@ static void i9xx_update_wm(struct intel_crtc *unused_crtc)
intel_set_memory_cxsr(dev_priv, true);
}

static void i845_update_wm(struct intel_crtc *unused_crtc)
static void i845_update_wm(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(unused_crtc->base.dev);
struct intel_crtc *crtc;
const struct drm_display_mode *pipe_mode;
u32 fwater_lo;
@@ -2490,7 +2487,7 @@ static void i845_update_wm(struct intel_crtc *unused_crtc)
pipe_mode = &crtc->config->hw.pipe_mode;
planea_wm = intel_calculate_wm(pipe_mode->crtc_clock,
&i845_wm_info,
dev_priv->display.get_fifo_size(dev_priv, PLANE_A),
i845_get_fifo_size(dev_priv, PLANE_A),
4, pessimal_latency_ns);
fwater_lo = intel_uncore_read(&dev_priv->uncore, FW_BLC) & ~0xfff;
fwater_lo |= (3<<8) | planea_wm;
@@ -2859,6 +2856,7 @@ static void intel_read_wm_latency(struct drm_i915_private *dev_priv,
u32 val;
int ret, i;
int level, max_level = ilk_wm_max_level(dev_priv);
int mult = IS_DG2(dev_priv) ? 2 : 1;

/* read the first set of memory latencies[0:3] */
val = 0; /* data0 to be programmed to 0 for first set */
@@ -2872,13 +2870,13 @@ static void intel_read_wm_latency(struct drm_i915_private *dev_priv,
return;
}

wm[0] = val & GEN9_MEM_LATENCY_LEVEL_MASK;
wm[1] = (val >> GEN9_MEM_LATENCY_LEVEL_1_5_SHIFT) &
GEN9_MEM_LATENCY_LEVEL_MASK;
wm[2] = (val >> GEN9_MEM_LATENCY_LEVEL_2_6_SHIFT) &
GEN9_MEM_LATENCY_LEVEL_MASK;
wm[3] = (val >> GEN9_MEM_LATENCY_LEVEL_3_7_SHIFT) &
GEN9_MEM_LATENCY_LEVEL_MASK;
wm[0] = (val & GEN9_MEM_LATENCY_LEVEL_MASK) * mult;
wm[1] = ((val >> GEN9_MEM_LATENCY_LEVEL_1_5_SHIFT) &
GEN9_MEM_LATENCY_LEVEL_MASK) * mult;
wm[2] = ((val >> GEN9_MEM_LATENCY_LEVEL_2_6_SHIFT) &
GEN9_MEM_LATENCY_LEVEL_MASK) * mult;
wm[3] = ((val >> GEN9_MEM_LATENCY_LEVEL_3_7_SHIFT) &
GEN9_MEM_LATENCY_LEVEL_MASK) * mult;

/* read the second set of memory latencies[4:7] */
val = 1; /* data0 to be programmed to 1 for second set */
@@ -2891,13 +2889,13 @@ static void intel_read_wm_latency(struct drm_i915_private *dev_priv,
return;
}

wm[4] = val & GEN9_MEM_LATENCY_LEVEL_MASK;
wm[5] = (val >> GEN9_MEM_LATENCY_LEVEL_1_5_SHIFT) &
GEN9_MEM_LATENCY_LEVEL_MASK;
wm[6] = (val >> GEN9_MEM_LATENCY_LEVEL_2_6_SHIFT) &
GEN9_MEM_LATENCY_LEVEL_MASK;
wm[7] = (val >> GEN9_MEM_LATENCY_LEVEL_3_7_SHIFT) &
GEN9_MEM_LATENCY_LEVEL_MASK;
wm[4] = (val & GEN9_MEM_LATENCY_LEVEL_MASK) * mult;
wm[5] = ((val >> GEN9_MEM_LATENCY_LEVEL_1_5_SHIFT) &
GEN9_MEM_LATENCY_LEVEL_MASK) * mult;
wm[6] = ((val >> GEN9_MEM_LATENCY_LEVEL_2_6_SHIFT) &
GEN9_MEM_LATENCY_LEVEL_MASK) * mult;
wm[7] = ((val >> GEN9_MEM_LATENCY_LEVEL_3_7_SHIFT) &
GEN9_MEM_LATENCY_LEVEL_MASK) * mult;

/*
* If a level n (n > 1) has a 0us latency, all levels m (m >= n)
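A condensed sketch of the DG2 change above, assuming nothing beyond what the hunk shows: each latency field unpacked from the pcode value is scaled by mult, which is 2 on DG2 and 1 everywhere else, so a raw field of 3 is reported as a latency of 6 on DG2. The helper name is illustrative.

static u16 sketch_mem_latency(u32 val, int shift, int mult)
{
	/* same unpack-then-scale step intel_read_wm_latency() applies per level */
	return ((val >> shift) & GEN9_MEM_LATENCY_LEVEL_MASK) * mult;
}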
@ -6832,7 +6830,8 @@ void g4x_wm_get_hw_state(struct drm_i915_private *dev_priv)
|
||||
for_each_plane_id_on_crtc(crtc, plane_id)
|
||||
raw->plane[plane_id] = active->wm.plane[plane_id];
|
||||
|
||||
if (++level > max_level)
|
||||
level = G4X_WM_LEVEL_SR;
|
||||
if (level > max_level)
|
||||
goto out;
|
||||
|
||||
raw = &crtc_state->wm.g4x.raw[level];
|
||||
@ -6841,7 +6840,8 @@ void g4x_wm_get_hw_state(struct drm_i915_private *dev_priv)
|
||||
raw->plane[PLANE_SPRITE0] = 0;
|
||||
raw->fbc = active->sr.fbc;
|
||||
|
||||
if (++level > max_level)
|
||||
level = G4X_WM_LEVEL_HPLL;
|
||||
if (level > max_level)
|
||||
goto out;
|
||||
|
||||
raw = &crtc_state->wm.g4x.raw[level];
|
||||
@ -6850,6 +6850,7 @@ void g4x_wm_get_hw_state(struct drm_i915_private *dev_priv)
|
||||
raw->plane[PLANE_SPRITE0] = 0;
|
||||
raw->fbc = active->hpll.fbc;
|
||||
|
||||
level++;
|
||||
out:
|
||||
for_each_plane_id_on_crtc(crtc, plane_id)
|
||||
g4x_raw_plane_wm_set(crtc_state, level,
|
||||
@@ -7129,47 +7130,6 @@ void ilk_wm_get_hw_state(struct drm_i915_private *dev_priv)
 		!(intel_uncore_read(&dev_priv->uncore, DISP_ARB_CTL) & DISP_FBC_WM_DIS);
 }
 
-/**
- * intel_update_watermarks - update FIFO watermark values based on current modes
- * @crtc: the #intel_crtc on which to compute the WM
- *
- * Calculate watermark values for the various WM regs based on current mode
- * and plane configuration.
- *
- * There are several cases to deal with here:
- *   - normal (i.e. non-self-refresh)
- *   - self-refresh (SR) mode
- *   - lines are large relative to FIFO size (buffer can hold up to 2)
- *   - lines are small relative to FIFO size (buffer can hold more than 2
- *     lines), so need to account for TLB latency
- *
- *   The normal calculation is:
- *     watermark = dotclock * bytes per pixel * latency
- *   where latency is platform & configuration dependent (we assume pessimal
- *   values here).
- *
- *   The SR calculation is:
- *     watermark = (trunc(latency/line time)+1) * surface width *
- *       bytes per pixel
- *   where
- *     line time = htotal / dotclock
- *     surface width = hdisplay for normal plane and 64 for cursor
- *   and latency is assumed to be high, as above.
- *
- * The final value programmed to the register should always be rounded up,
- * and include an extra 2 entries to account for clock crossings.
- *
- * We don't use the sprite, so we can ignore that. And on Crestline we have
- * to set the non-SR watermarks to 8.
- */
-void intel_update_watermarks(struct intel_crtc *crtc)
-{
-	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
-
-	if (dev_priv->display.update_wm)
-		dev_priv->display.update_wm(crtc);
-}
-
 void intel_enable_ipc(struct drm_i915_private *dev_priv)
 {
 	u32 val;
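As an aside, the two formulas documented in the (removed) comment above are easy to sanity-check in isolation. The sketch below is purely illustrative: the helper names, the 64-byte FIFO entry size and the unit handling are assumptions made for this example, not code taken from the driver.

#include <stdio.h>

/* watermark = dotclock * bytes per pixel * latency, rounded up, plus 2 entries */
static unsigned long normal_wm_entries(unsigned long dotclock_khz, unsigned int cpp,
				       unsigned int latency_us, unsigned int entry_bytes)
{
	/* bytes fetched from memory while waiting out the latency window */
	unsigned long long bytes = (unsigned long long)dotclock_khz * cpp * latency_us / 1000;

	/* round up to FIFO entries and add 2 for clock domain crossings */
	return (unsigned long)((bytes + entry_bytes - 1) / entry_bytes) + 2;
}

/* watermark = (trunc(latency / line time) + 1) * surface width * bytes per pixel */
static unsigned long sr_wm_bytes(unsigned int htotal, unsigned int hdisplay,
				 unsigned long dotclock_khz, unsigned int cpp,
				 unsigned int latency_us)
{
	/* line time = htotal / dotclock, computed here in nanoseconds */
	unsigned long line_time_ns =
		(unsigned long)((unsigned long long)htotal * 1000000 / dotclock_khz);

	return (latency_us * 1000UL / line_time_ns + 1) * hdisplay * cpp;
}

int main(void)
{
	/* hypothetical 1920x1080 timing: 148.5 MHz dotclock, htotal 2200, 4 bytes/px, 20 us latency */
	printf("normal: %lu FIFO entries\n", normal_wm_entries(148500, 4, 20, 64));
	printf("SR:     %lu bytes\n", sr_wm_bytes(2200, 1920, 148500, 4, 20));
	return 0;
}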
@@ -7909,7 +7869,7 @@ static void i830_init_clock_gating(struct drm_i915_private *dev_priv)
 
 void intel_init_clock_gating(struct drm_i915_private *dev_priv)
 {
-	dev_priv->display.init_clock_gating(dev_priv);
+	dev_priv->clock_gating_funcs->init_clock_gating(dev_priv);
 }
 
 void intel_suspend_hw(struct drm_i915_private *dev_priv)
@@ -7924,6 +7884,36 @@ static void nop_init_clock_gating(struct drm_i915_private *dev_priv)
 		    "No clock gating settings or workarounds applied.\n");
 }
 
+#define CG_FUNCS(platform) \
+static const struct drm_i915_clock_gating_funcs platform##_clock_gating_funcs = { \
+	.init_clock_gating = platform##_init_clock_gating, \
+}
+
+CG_FUNCS(adlp);
+CG_FUNCS(dg1);
+CG_FUNCS(gen12lp);
+CG_FUNCS(icl);
+CG_FUNCS(cfl);
+CG_FUNCS(skl);
+CG_FUNCS(kbl);
+CG_FUNCS(bxt);
+CG_FUNCS(glk);
+CG_FUNCS(bdw);
+CG_FUNCS(chv);
+CG_FUNCS(hsw);
+CG_FUNCS(ivb);
+CG_FUNCS(vlv);
+CG_FUNCS(gen6);
+CG_FUNCS(ilk);
+CG_FUNCS(g4x);
+CG_FUNCS(i965gm);
+CG_FUNCS(i965g);
+CG_FUNCS(gen3);
+CG_FUNCS(i85x);
+CG_FUNCS(i830);
+CG_FUNCS(nop);
+#undef CG_FUNCS
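Each CG_FUNCS(name); invocation above expands to one static const vtable. Expanding CG_FUNCS(skl); by hand, for example, gives the following (shown purely for illustration):

static const struct drm_i915_clock_gating_funcs skl_clock_gating_funcs = {
	.init_clock_gating = skl_init_clock_gating,
};

Generating the tables with a macro keeps one read-only structure per platform, so intel_init_clock_gating_hooks() below only has to pick a pointer instead of patching a writable per-device function pointer.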
+
 /**
  * intel_init_clock_gating_hooks - setup the clock gating hooks
  * @dev_priv: device private
@@ -7936,55 +7926,100 @@ static void nop_init_clock_gating(struct drm_i915_private *dev_priv)
 void intel_init_clock_gating_hooks(struct drm_i915_private *dev_priv)
 {
 	if (IS_ALDERLAKE_P(dev_priv))
-		dev_priv->display.init_clock_gating = adlp_init_clock_gating;
+		dev_priv->clock_gating_funcs = &adlp_clock_gating_funcs;
 	else if (IS_DG1(dev_priv))
-		dev_priv->display.init_clock_gating = dg1_init_clock_gating;
+		dev_priv->clock_gating_funcs = &dg1_clock_gating_funcs;
 	else if (GRAPHICS_VER(dev_priv) == 12)
-		dev_priv->display.init_clock_gating = gen12lp_init_clock_gating;
+		dev_priv->clock_gating_funcs = &gen12lp_clock_gating_funcs;
 	else if (GRAPHICS_VER(dev_priv) == 11)
-		dev_priv->display.init_clock_gating = icl_init_clock_gating;
+		dev_priv->clock_gating_funcs = &icl_clock_gating_funcs;
 	else if (IS_COFFEELAKE(dev_priv) || IS_COMETLAKE(dev_priv))
-		dev_priv->display.init_clock_gating = cfl_init_clock_gating;
+		dev_priv->clock_gating_funcs = &cfl_clock_gating_funcs;
 	else if (IS_SKYLAKE(dev_priv))
-		dev_priv->display.init_clock_gating = skl_init_clock_gating;
+		dev_priv->clock_gating_funcs = &skl_clock_gating_funcs;
 	else if (IS_KABYLAKE(dev_priv))
-		dev_priv->display.init_clock_gating = kbl_init_clock_gating;
+		dev_priv->clock_gating_funcs = &kbl_clock_gating_funcs;
 	else if (IS_BROXTON(dev_priv))
-		dev_priv->display.init_clock_gating = bxt_init_clock_gating;
+		dev_priv->clock_gating_funcs = &bxt_clock_gating_funcs;
 	else if (IS_GEMINILAKE(dev_priv))
-		dev_priv->display.init_clock_gating = glk_init_clock_gating;
+		dev_priv->clock_gating_funcs = &glk_clock_gating_funcs;
 	else if (IS_BROADWELL(dev_priv))
-		dev_priv->display.init_clock_gating = bdw_init_clock_gating;
+		dev_priv->clock_gating_funcs = &bdw_clock_gating_funcs;
 	else if (IS_CHERRYVIEW(dev_priv))
-		dev_priv->display.init_clock_gating = chv_init_clock_gating;
+		dev_priv->clock_gating_funcs = &chv_clock_gating_funcs;
 	else if (IS_HASWELL(dev_priv))
-		dev_priv->display.init_clock_gating = hsw_init_clock_gating;
+		dev_priv->clock_gating_funcs = &hsw_clock_gating_funcs;
 	else if (IS_IVYBRIDGE(dev_priv))
-		dev_priv->display.init_clock_gating = ivb_init_clock_gating;
+		dev_priv->clock_gating_funcs = &ivb_clock_gating_funcs;
 	else if (IS_VALLEYVIEW(dev_priv))
-		dev_priv->display.init_clock_gating = vlv_init_clock_gating;
+		dev_priv->clock_gating_funcs = &vlv_clock_gating_funcs;
 	else if (GRAPHICS_VER(dev_priv) == 6)
-		dev_priv->display.init_clock_gating = gen6_init_clock_gating;
+		dev_priv->clock_gating_funcs = &gen6_clock_gating_funcs;
 	else if (GRAPHICS_VER(dev_priv) == 5)
-		dev_priv->display.init_clock_gating = ilk_init_clock_gating;
+		dev_priv->clock_gating_funcs = &ilk_clock_gating_funcs;
 	else if (IS_G4X(dev_priv))
-		dev_priv->display.init_clock_gating = g4x_init_clock_gating;
+		dev_priv->clock_gating_funcs = &g4x_clock_gating_funcs;
 	else if (IS_I965GM(dev_priv))
-		dev_priv->display.init_clock_gating = i965gm_init_clock_gating;
+		dev_priv->clock_gating_funcs = &i965gm_clock_gating_funcs;
 	else if (IS_I965G(dev_priv))
-		dev_priv->display.init_clock_gating = i965g_init_clock_gating;
+		dev_priv->clock_gating_funcs = &i965g_clock_gating_funcs;
 	else if (GRAPHICS_VER(dev_priv) == 3)
-		dev_priv->display.init_clock_gating = gen3_init_clock_gating;
+		dev_priv->clock_gating_funcs = &gen3_clock_gating_funcs;
 	else if (IS_I85X(dev_priv) || IS_I865G(dev_priv))
-		dev_priv->display.init_clock_gating = i85x_init_clock_gating;
+		dev_priv->clock_gating_funcs = &i85x_clock_gating_funcs;
 	else if (GRAPHICS_VER(dev_priv) == 2)
-		dev_priv->display.init_clock_gating = i830_init_clock_gating;
+		dev_priv->clock_gating_funcs = &i830_clock_gating_funcs;
 	else {
 		MISSING_CASE(INTEL_DEVID(dev_priv));
-		dev_priv->display.init_clock_gating = nop_init_clock_gating;
+		dev_priv->clock_gating_funcs = &nop_clock_gating_funcs;
 	}
 }
+
+static const struct drm_i915_wm_disp_funcs skl_wm_funcs = {
+	.compute_global_watermarks = skl_compute_wm,
+};
+
+static const struct drm_i915_wm_disp_funcs ilk_wm_funcs = {
+	.compute_pipe_wm = ilk_compute_pipe_wm,
+	.compute_intermediate_wm = ilk_compute_intermediate_wm,
+	.initial_watermarks = ilk_initial_watermarks,
+	.optimize_watermarks = ilk_optimize_watermarks,
+};
+
+static const struct drm_i915_wm_disp_funcs vlv_wm_funcs = {
+	.compute_pipe_wm = vlv_compute_pipe_wm,
+	.compute_intermediate_wm = vlv_compute_intermediate_wm,
+	.initial_watermarks = vlv_initial_watermarks,
+	.optimize_watermarks = vlv_optimize_watermarks,
+	.atomic_update_watermarks = vlv_atomic_update_fifo,
+};
+
+static const struct drm_i915_wm_disp_funcs g4x_wm_funcs = {
+	.compute_pipe_wm = g4x_compute_pipe_wm,
+	.compute_intermediate_wm = g4x_compute_intermediate_wm,
+	.initial_watermarks = g4x_initial_watermarks,
+	.optimize_watermarks = g4x_optimize_watermarks,
+};
+
+static const struct drm_i915_wm_disp_funcs pnv_wm_funcs = {
+	.update_wm = pnv_update_wm,
+};
+
+static const struct drm_i915_wm_disp_funcs i965_wm_funcs = {
+	.update_wm = i965_update_wm,
+};
+
+static const struct drm_i915_wm_disp_funcs i9xx_wm_funcs = {
+	.update_wm = i9xx_update_wm,
+};
+
+static const struct drm_i915_wm_disp_funcs i845_wm_funcs = {
+	.update_wm = i845_update_wm,
+};
+
+static const struct drm_i915_wm_disp_funcs nop_funcs = {
+};
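With these tables in place, watermark call sites can test and dispatch through a single const wm_disp pointer rather than a bundle of per-device function pointers. The snippet below is a hedged sketch of what such a call site could look like; the function name is invented for illustration and this is not necessarily the exact code the series ends up with:

static void example_update_wm(struct intel_crtc *crtc)
{
	struct drm_i915_private *i915 = to_i915(crtc->base.dev);

	/* .update_wm is optional: only the legacy single-step platforms set it */
	if (i915->wm_disp->update_wm)
		i915->wm_disp->update_wm(crtc);
}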
 
 /* Set up chip specific power management-related functions */
 void intel_init_pm(struct drm_i915_private *dev_priv)
 {
@@ -8000,7 +8035,7 @@ void intel_init_pm(struct drm_i915_private *dev_priv)
 	/* For FIFO watermark updates */
 	if (DISPLAY_VER(dev_priv) >= 9) {
 		skl_setup_wm_latency(dev_priv);
-		dev_priv->display.compute_global_watermarks = skl_compute_wm;
+		dev_priv->wm_disp = &skl_wm_funcs;
 	} else if (HAS_PCH_SPLIT(dev_priv)) {
 		ilk_setup_wm_latency(dev_priv);
 
@@ -8008,31 +8043,19 @@ void intel_init_pm(struct drm_i915_private *dev_priv)
 		      dev_priv->wm.spr_latency[1] && dev_priv->wm.cur_latency[1]) ||
 		    (DISPLAY_VER(dev_priv) != 5 && dev_priv->wm.pri_latency[0] &&
 		     dev_priv->wm.spr_latency[0] && dev_priv->wm.cur_latency[0])) {
-			dev_priv->display.compute_pipe_wm = ilk_compute_pipe_wm;
-			dev_priv->display.compute_intermediate_wm =
-				ilk_compute_intermediate_wm;
-			dev_priv->display.initial_watermarks =
-				ilk_initial_watermarks;
-			dev_priv->display.optimize_watermarks =
-				ilk_optimize_watermarks;
+			dev_priv->wm_disp = &ilk_wm_funcs;
 		} else {
 			drm_dbg_kms(&dev_priv->drm,
 				    "Failed to read display plane latency. "
 				    "Disable CxSR\n");
+			dev_priv->wm_disp = &nop_funcs;
 		}
 	} else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
 		vlv_setup_wm_latency(dev_priv);
-		dev_priv->display.compute_pipe_wm = vlv_compute_pipe_wm;
-		dev_priv->display.compute_intermediate_wm = vlv_compute_intermediate_wm;
-		dev_priv->display.initial_watermarks = vlv_initial_watermarks;
-		dev_priv->display.optimize_watermarks = vlv_optimize_watermarks;
-		dev_priv->display.atomic_update_watermarks = vlv_atomic_update_fifo;
+		dev_priv->wm_disp = &vlv_wm_funcs;
 	} else if (IS_G4X(dev_priv)) {
 		g4x_setup_wm_latency(dev_priv);
-		dev_priv->display.compute_pipe_wm = g4x_compute_pipe_wm;
-		dev_priv->display.compute_intermediate_wm = g4x_compute_intermediate_wm;
-		dev_priv->display.initial_watermarks = g4x_initial_watermarks;
-		dev_priv->display.optimize_watermarks = g4x_optimize_watermarks;
+		dev_priv->wm_disp = &g4x_wm_funcs;
 	} else if (IS_PINEVIEW(dev_priv)) {
 		if (!intel_get_cxsr_latency(!IS_MOBILE(dev_priv),
 					    dev_priv->is_ddr3,
@@ -8046,25 +8069,22 @@ void intel_init_pm(struct drm_i915_private *dev_priv)
 				 dev_priv->fsb_freq, dev_priv->mem_freq);
 			/* Disable CxSR and never update its watermark again */
 			intel_set_memory_cxsr(dev_priv, false);
-			dev_priv->display.update_wm = NULL;
+			dev_priv->wm_disp = &nop_funcs;
 		} else
-			dev_priv->display.update_wm = pnv_update_wm;
+			dev_priv->wm_disp = &pnv_wm_funcs;
 	} else if (DISPLAY_VER(dev_priv) == 4) {
-		dev_priv->display.update_wm = i965_update_wm;
+		dev_priv->wm_disp = &i965_wm_funcs;
 	} else if (DISPLAY_VER(dev_priv) == 3) {
-		dev_priv->display.update_wm = i9xx_update_wm;
-		dev_priv->display.get_fifo_size = i9xx_get_fifo_size;
+		dev_priv->wm_disp = &i9xx_wm_funcs;
 	} else if (DISPLAY_VER(dev_priv) == 2) {
-		if (INTEL_NUM_PIPES(dev_priv) == 1) {
-			dev_priv->display.update_wm = i845_update_wm;
-			dev_priv->display.get_fifo_size = i845_get_fifo_size;
-		} else {
-			dev_priv->display.update_wm = i9xx_update_wm;
-			dev_priv->display.get_fifo_size = i830_get_fifo_size;
-		}
+		if (INTEL_NUM_PIPES(dev_priv) == 1)
+			dev_priv->wm_disp = &i845_wm_funcs;
+		else
+			dev_priv->wm_disp = &i9xx_wm_funcs;
 	} else {
 		drm_err(&dev_priv->drm,
 			"unexpected fall-through in %s\n", __func__);
+		dev_priv->wm_disp = &nop_funcs;
 	}
 }
 
@@ -8,7 +8,6 @@
 
 #include <linux/types.h>
 
 #include "display/intel_bw.h"
 #include "display/intel_display.h"
 #include "display/intel_global_state.h"
 
@@ -19,6 +18,7 @@ struct drm_device;
 struct drm_i915_private;
 struct i915_request;
 struct intel_atomic_state;
 struct intel_bw_state;
 struct intel_crtc;
 struct intel_crtc_state;
 struct intel_plane;
@@ -29,7 +29,6 @@ struct skl_wm_level;
 void intel_init_clock_gating(struct drm_i915_private *dev_priv);
 void intel_suspend_hw(struct drm_i915_private *dev_priv);
 int ilk_wm_max_level(const struct drm_i915_private *dev_priv);
-void intel_update_watermarks(struct intel_crtc *crtc);
 void intel_init_pm(struct drm_i915_private *dev_priv);
 void intel_init_clock_gating_hooks(struct drm_i915_private *dev_priv);
 void intel_pm_setup(struct drm_i915_private *dev_priv);
@@ -8,8 +8,6 @@
 
 #include <linux/types.h>
 
 #include "display/intel_display.h"
 
 #include "intel_wakeref.h"
 
 #include "i915_utils.h"
 
@@ -36,6 +36,12 @@
 
 #define __raw_posting_read(...) ((void)__raw_uncore_read32(__VA_ARGS__))
 
+static void
+fw_domains_get(struct intel_uncore *uncore, enum forcewake_domains fw_domains)
+{
+	uncore->fw_get_funcs->force_wake_get(uncore, fw_domains);
+}
+
 void
 intel_uncore_mmio_debug_init_early(struct intel_uncore_mmio_debug *mmio_debug)
 {
@@ -248,7 +254,7 @@ fw_domain_put(const struct intel_uncore_forcewake_domain *d)
 }
 
 static void
-fw_domains_get(struct intel_uncore *uncore, enum forcewake_domains fw_domains)
+fw_domains_get_normal(struct intel_uncore *uncore, enum forcewake_domains fw_domains)
 {
 	struct intel_uncore_forcewake_domain *d;
 	unsigned int tmp;
@@ -340,7 +346,7 @@ static void __gen6_gt_wait_for_thread_c0(struct intel_uncore *uncore)
 static void fw_domains_get_with_thread_status(struct intel_uncore *uncore,
 					      enum forcewake_domains fw_domains)
 {
-	fw_domains_get(uncore, fw_domains);
+	fw_domains_get_normal(uncore, fw_domains);
 
 	/* WaRsForcewakeWaitTC0:snb,ivb,hsw,bdw,vlv */
 	__gen6_gt_wait_for_thread_c0(uncore);
@@ -396,7 +402,7 @@ intel_uncore_fw_release_timer(struct hrtimer *timer)
 
 	GEM_BUG_ON(!domain->wake_count);
 	if (--domain->wake_count == 0)
-		uncore->funcs.force_wake_put(uncore, domain->mask);
+		fw_domains_put(uncore, domain->mask);
 
 	spin_unlock_irqrestore(&uncore->lock, irqflags);
 
@@ -454,7 +460,7 @@ intel_uncore_forcewake_reset(struct intel_uncore *uncore)
 
 	fw = uncore->fw_domains_active;
 	if (fw)
-		uncore->funcs.force_wake_put(uncore, fw);
+		fw_domains_put(uncore, fw);
 
 	fw_domains_reset(uncore, uncore->fw_domains);
 	assert_forcewakes_inactive(uncore);
@@ -562,7 +568,7 @@ static void forcewake_early_sanitize(struct intel_uncore *uncore,
 	intel_uncore_forcewake_reset(uncore);
 	if (restore_forcewake) {
 		spin_lock_irq(&uncore->lock);
-		uncore->funcs.force_wake_get(uncore, restore_forcewake);
+		fw_domains_get(uncore, restore_forcewake);
 
 		if (intel_uncore_has_fifo(uncore))
 			uncore->fifo_count = fifo_free_entries(uncore);
@@ -623,7 +629,7 @@ static void __intel_uncore_forcewake_get(struct intel_uncore *uncore,
 	}
 
 	if (fw_domains)
-		uncore->funcs.force_wake_get(uncore, fw_domains);
+		fw_domains_get(uncore, fw_domains);
 }
 
 /**
@@ -644,7 +650,7 @@ void intel_uncore_forcewake_get(struct intel_uncore *uncore,
 {
 	unsigned long irqflags;
 
-	if (!uncore->funcs.force_wake_get)
+	if (!uncore->fw_get_funcs)
 		return;
 
 	assert_rpm_wakelock_held(uncore->rpm);
@@ -711,7 +717,7 @@ void intel_uncore_forcewake_get__locked(struct intel_uncore *uncore,
 {
 	lockdep_assert_held(&uncore->lock);
 
-	if (!uncore->funcs.force_wake_get)
+	if (!uncore->fw_get_funcs)
 		return;
 
 	__intel_uncore_forcewake_get(uncore, fw_domains);
@@ -733,7 +739,7 @@ static void __intel_uncore_forcewake_put(struct intel_uncore *uncore,
 			continue;
 		}
 
-		uncore->funcs.force_wake_put(uncore, domain->mask);
+		fw_domains_put(uncore, domain->mask);
 	}
 }
 
@@ -750,7 +756,7 @@ void intel_uncore_forcewake_put(struct intel_uncore *uncore,
 {
 	unsigned long irqflags;
 
-	if (!uncore->funcs.force_wake_put)
+	if (!uncore->fw_get_funcs)
 		return;
 
 	spin_lock_irqsave(&uncore->lock, irqflags);
@@ -769,7 +775,7 @@ void intel_uncore_forcewake_flush(struct intel_uncore *uncore,
 	struct intel_uncore_forcewake_domain *domain;
 	unsigned int tmp;
 
-	if (!uncore->funcs.force_wake_put)
+	if (!uncore->fw_get_funcs)
 		return;
 
 	fw_domains &= uncore->fw_domains;
@@ -793,7 +799,7 @@ void intel_uncore_forcewake_put__locked(struct intel_uncore *uncore,
 {
 	lockdep_assert_held(&uncore->lock);
 
-	if (!uncore->funcs.force_wake_put)
+	if (!uncore->fw_get_funcs)
 		return;
 
 	__intel_uncore_forcewake_put(uncore, fw_domains);
@@ -801,7 +807,7 @@ void intel_uncore_forcewake_put__locked(struct intel_uncore *uncore,
 
 void assert_forcewakes_inactive(struct intel_uncore *uncore)
 {
-	if (!uncore->funcs.force_wake_get)
+	if (!uncore->fw_get_funcs)
 		return;
 
 	drm_WARN(&uncore->i915->drm, uncore->fw_domains_active,
@@ -818,7 +824,7 @@ void assert_forcewakes_active(struct intel_uncore *uncore,
 	if (!IS_ENABLED(CONFIG_DRM_I915_DEBUG_RUNTIME_PM))
 		return;
 
-	if (!uncore->funcs.force_wake_get)
+	if (!uncore->fw_get_funcs)
 		return;
 
 	spin_lock_irq(&uncore->lock);
@@ -1605,7 +1611,7 @@ static noinline void ___force_wake_auto(struct intel_uncore *uncore,
 	for_each_fw_domain_masked(domain, fw_domains, uncore, tmp)
 		fw_domain_arm_timer(domain);
 
-	uncore->funcs.force_wake_get(uncore, fw_domains);
+	fw_domains_get(uncore, fw_domains);
 }
 
 static inline void __force_wake_auto(struct intel_uncore *uncore,
@@ -1866,6 +1872,18 @@ static void intel_uncore_fw_domains_fini(struct intel_uncore *uncore)
 		fw_domain_fini(uncore, d->id);
 }
 
+static const struct intel_uncore_fw_get uncore_get_fallback = {
+	.force_wake_get = fw_domains_get_with_fallback
+};
+
+static const struct intel_uncore_fw_get uncore_get_normal = {
+	.force_wake_get = fw_domains_get_normal,
+};
+
+static const struct intel_uncore_fw_get uncore_get_thread_status = {
+	.force_wake_get = fw_domains_get_with_thread_status
+};
+
 static int intel_uncore_fw_domains_init(struct intel_uncore *uncore)
 {
 	struct drm_i915_private *i915 = uncore->i915;
@@ -1881,8 +1899,7 @@ static int intel_uncore_fw_domains_init(struct intel_uncore *uncore)
 		intel_engine_mask_t emask = INTEL_INFO(i915)->platform_engine_mask;
 		int i;
 
-		uncore->funcs.force_wake_get = fw_domains_get_with_fallback;
-		uncore->funcs.force_wake_put = fw_domains_put;
+		uncore->fw_get_funcs = &uncore_get_fallback;
 		fw_domain_init(uncore, FW_DOMAIN_ID_RENDER,
 			       FORCEWAKE_RENDER_GEN9,
 			       FORCEWAKE_ACK_RENDER_GEN9);
@@ -1907,8 +1924,7 @@
 					 FORCEWAKE_ACK_MEDIA_VEBOX_GEN11(i));
 		}
 	} else if (IS_GRAPHICS_VER(i915, 9, 10)) {
-		uncore->funcs.force_wake_get = fw_domains_get_with_fallback;
-		uncore->funcs.force_wake_put = fw_domains_put;
+		uncore->fw_get_funcs = &uncore_get_fallback;
 		fw_domain_init(uncore, FW_DOMAIN_ID_RENDER,
 			       FORCEWAKE_RENDER_GEN9,
 			       FORCEWAKE_ACK_RENDER_GEN9);
@@ -1918,16 +1934,13 @@
 		fw_domain_init(uncore, FW_DOMAIN_ID_MEDIA,
 			       FORCEWAKE_MEDIA_GEN9, FORCEWAKE_ACK_MEDIA_GEN9);
 	} else if (IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915)) {
-		uncore->funcs.force_wake_get = fw_domains_get;
-		uncore->funcs.force_wake_put = fw_domains_put;
+		uncore->fw_get_funcs = &uncore_get_normal;
 		fw_domain_init(uncore, FW_DOMAIN_ID_RENDER,
 			       FORCEWAKE_VLV, FORCEWAKE_ACK_VLV);
 		fw_domain_init(uncore, FW_DOMAIN_ID_MEDIA,
 			       FORCEWAKE_MEDIA_VLV, FORCEWAKE_ACK_MEDIA_VLV);
 	} else if (IS_HASWELL(i915) || IS_BROADWELL(i915)) {
-		uncore->funcs.force_wake_get =
-			fw_domains_get_with_thread_status;
-		uncore->funcs.force_wake_put = fw_domains_put;
+		uncore->fw_get_funcs = &uncore_get_thread_status;
 		fw_domain_init(uncore, FW_DOMAIN_ID_RENDER,
 			       FORCEWAKE_MT, FORCEWAKE_ACK_HSW);
 	} else if (IS_IVYBRIDGE(i915)) {
@@ -1942,9 +1955,7 @@
 		 * (correctly) interpreted by the test below as MT
 		 * forcewake being disabled.
 		 */
-		uncore->funcs.force_wake_get =
-			fw_domains_get_with_thread_status;
-		uncore->funcs.force_wake_put = fw_domains_put;
+		uncore->fw_get_funcs = &uncore_get_thread_status;
 
 		/* We need to init first for ECOBUS access and then
 		 * determine later if we want to reinit, in case of MT access is
@@ -1975,9 +1986,7 @@
 			       FORCEWAKE, FORCEWAKE_ACK);
 		}
 	} else if (GRAPHICS_VER(i915) == 6) {
-		uncore->funcs.force_wake_get =
-			fw_domains_get_with_thread_status;
-		uncore->funcs.force_wake_put = fw_domains_put;
+		uncore->fw_get_funcs = &uncore_get_thread_status;
 		fw_domain_init(uncore, FW_DOMAIN_ID_RENDER,
 			       FORCEWAKE, FORCEWAKE_ACK);
 	}
@@ -2186,8 +2195,7 @@ int intel_uncore_init_mmio(struct intel_uncore *uncore)
 	}
 
 	/* make sure fw funcs are set if and only if we have fw*/
-	GEM_BUG_ON(intel_uncore_has_forcewake(uncore) != !!uncore->funcs.force_wake_get);
-	GEM_BUG_ON(intel_uncore_has_forcewake(uncore) != !!uncore->funcs.force_wake_put);
+	GEM_BUG_ON(intel_uncore_has_forcewake(uncore) != !!uncore->fw_get_funcs);
 	GEM_BUG_ON(intel_uncore_has_forcewake(uncore) != !!uncore->funcs.read_fw_domains);
 	GEM_BUG_ON(intel_uncore_has_forcewake(uncore) != !!uncore->funcs.write_fw_domains);
 
@@ -84,12 +84,12 @@ enum forcewake_domains {
 	FORCEWAKE_ALL = BIT(FW_DOMAIN_ID_COUNT) - 1,
 };
 
-struct intel_uncore_funcs {
+struct intel_uncore_fw_get {
 	void (*force_wake_get)(struct intel_uncore *uncore,
 			       enum forcewake_domains domains);
-	void (*force_wake_put)(struct intel_uncore *uncore,
-			       enum forcewake_domains domains);
+};
+
+struct intel_uncore_funcs {
 	enum forcewake_domains (*read_fw_domains)(struct intel_uncore *uncore,
 						  i915_reg_t r);
 	enum forcewake_domains (*write_fw_domains)(struct intel_uncore *uncore,
@@ -137,6 +137,7 @@ struct intel_uncore {
 	unsigned int fw_domains_table_entries;
 
 	struct notifier_block pmic_bus_access_nb;
+	const struct intel_uncore_fw_get *fw_get_funcs;
 	struct intel_uncore_funcs funcs;
 
 	unsigned int fifo_count;
@@ -47,6 +47,8 @@ static bool use_bgrt = true;
 static bool request_mem_succeeded = false;
 static u64 mem_flags = EFI_MEMORY_WC | EFI_MEMORY_UC;
 
+static struct pci_dev *efifb_pci_dev; /* dev with BAR covering the efifb */
+
 static struct fb_var_screeninfo efifb_defined = {
 	.activate = FB_ACTIVATE_NOW,
 	.height = -1,
@@ -243,6 +245,9 @@ static inline void efifb_show_boot_graphics(struct fb_info *info) {}
 
 static void efifb_destroy(struct fb_info *info)
 {
+	if (efifb_pci_dev)
+		pm_runtime_put(&efifb_pci_dev->dev);
+
 	if (info->screen_base) {
 		if (mem_flags & (EFI_MEMORY_UC | EFI_MEMORY_WC))
 			iounmap(info->screen_base);
@@ -333,7 +338,6 @@ ATTRIBUTE_GROUPS(efifb);
 
 static bool pci_dev_disabled; /* FB base matches BAR of a disabled device */
 
-static struct pci_dev *efifb_pci_dev; /* dev with BAR covering the efifb */
 static struct resource *bar_resource;
 static u64 bar_offset;
 
@@ -569,17 +573,22 @@ static int efifb_probe(struct platform_device *dev)
 		pr_err("efifb: cannot allocate colormap\n");
 		goto err_groups;
 	}
+
+	if (efifb_pci_dev)
+		WARN_ON(pm_runtime_get_sync(&efifb_pci_dev->dev) < 0);
+
 	err = register_framebuffer(info);
 	if (err < 0) {
 		pr_err("efifb: cannot register framebuffer\n");
-		goto err_fb_dealoc;
+		goto err_put_rpm_ref;
 	}
 	fb_info(info, "%s frame buffer device\n", info->fix.id);
-	if (efifb_pci_dev)
-		pm_runtime_get_sync(&efifb_pci_dev->dev);
 	return 0;
 
-err_fb_dealoc:
+err_put_rpm_ref:
+	if (efifb_pci_dev)
+		pm_runtime_put(&efifb_pci_dev->dev);
+
 	fb_dealloc_cmap(&info->cmap);
 err_groups:
 	sysfs_remove_groups(&dev->dev.kobj, efifb_groups);
@@ -603,8 +612,6 @@ static int efifb_remove(struct platform_device *pdev)
 	unregister_framebuffer(info);
 	sysfs_remove_groups(&pdev->dev.kobj, efifb_groups);
 	framebuffer_release(info);
-	if (efifb_pci_dev)
-		pm_runtime_put(&efifb_pci_dev->dev);
 
 	return 0;
 }