vblank_disable_allowed is only ever 0 or 1, so make it a bool.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
drm_helper_probe_single_connector_modes() can be used to implement
->fill_modes(), not ->probe().
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
If the firmware is not builtin and userspace is not yet running, we can
stall the boot process for a minute whilst the firmware loader times
out. This is contrary to expectations of providing a builtin EDID!
In the process, we can rearrange the code to make the error handling
more resilient and prevent gcc warnings about uninitialised variables along
the error paths.
v2: Load builtins first, fix gcc second (Jani) and cosmetics (Ville).
v3: Verify that we do not read beyond the end of the fwdata (Ville)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Analogous to drm_dev_register(), we now provide drm_dev_unregister(), which
does the reverse. drm_dev_put() is still in place and combines the calls
to drm_dev_unregister() and drm_dev_free() so buses don't have to change.
*_get() and *_put() are used for reference-counting in the kernel.
However, drm_dev_put() definitely does not do any kind of ref-counting.
Hence, use the more appropriate *_register(), *_unregister(), *_alloc()
and *_free() names.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The error paths in DRM bus drivers currently leak memory as they don't
correctly revert drm_dev_alloc(). Introduce drm_dev_free() to free DRM
devices which haven't been registered, yet.
We must be careful not to introduce any side-effects with cleanups done in
drm_dev_free(). drm_ht_remove(), drm_ctxbitmap_cleanup() and
drm_gem_destroy() are all fine in that regard.
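As a hedged illustration of the intended usage (only drm_dev_alloc(),
drm_dev_register() and drm_dev_free() come from this series; the setup step
and error labels are hypothetical), a bus driver probe error path now looks
roughly like:

  dev = drm_dev_alloc(driver, parent);
  if (!dev)
          return -ENOMEM;

  ret = bus_specific_setup(dev);          /* hypothetical bus setup step */
  if (ret)
          goto err_free;

  ret = drm_dev_register(dev, flags);
  if (ret)
          goto err_free;

  return 0;

  err_free:
          drm_dev_free(dev);              /* frees a not-yet-registered device */
          return ret;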
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Try to keep all functions that handle DRM file_operations in drm_fops.c
so internal helpers can be marked static later.
This makes the split between the 3 core files more obvious:
- drm_stub.c: DRM device allocation/destruction and management
- drm_fops.c: DRM file_operations (except for ioctl)
- drm_drv.c: Global DRM init + ioctl handling
Well, ioctl handling is still spread throughout hundreds of source files,
but at least the others are clearly defined this way.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
All bus drivers do device setup themselves. This requires us to adjust all
of them if we introduce new core features. Thus, merge all these into a
uniform drm_dev_register() helper.
Note that this removes the drm_lastclose() error path for AGP as it is
horribly broken. Moreover, no bus driver called this in any other error
path either. Instead, we use the recently introduced AGP cleanup helpers.
We also keep a DRIVER_MODESET condition around pci_set_drvdata() to keep
semantics.
[airlied: keep passing flags through so drivers don't oops on load]
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Instead of managing device allocation+initialization in each bus-driver,
we should do that in a central place. drm_fill_in_dev() already does most
of it, but also requires the global drm lock for partial AGP device
registration.
Split both apart so we have a clean device initialization/allocation
phase, and a registration phase.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
All drivers embed gem-objects into their own buffer objects. There is no
reason to keep drm_gem_object_alloc(), gem->driver_private and
->gem_init_object() anymore.
New drivers are highly encouraged to do the same. There is no benefit in
allocating gem-objects separately.
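A minimal sketch of the recommended pattern (the driver and type names here
are made up; embedding the drm_gem_object is the point):

  struct foo_bo {
          struct drm_gem_object gem;      /* embedded, not separately allocated */
          /* ... driver-private fields ... */
  };

  static inline struct foo_bo *to_foo_bo(struct drm_gem_object *gem)
  {
          return container_of(gem, struct foo_bo, gem);
  }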
Cc: Dave Airlie <airlied@gmail.com>
Cc: Alex Deucher <alexdeucher@gmail.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Ben Skeggs <skeggsb@gmail.com>
Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
There is no reason to keep the gem object separately allocated. nouveau is
the last user of gem_obj->driver_private, so if we embed it, we can get
rid of 8 bytes per gem-object.
The implementation follows the radeon driver. bo->gem is only valid if
the bo was created via the gem helpers _and_ if the user holds a valid
gem reference; that is, the gem object holds a reference to the
nouveau_bo. If you use nouveau_ref() to gain a bo reference, you are not
guaranteed to also hold a gem reference. The gem object might get
destroyed after the last user drops the gem-ref via
drm_gem_object_unreference(). Use drm_gem_object_reference() to gain a
gem-reference.
For debugging, we can use bo->gem.filp != NULL to test whether a gem-bo is
valid. However, this shouldn't be used for real functionality to avoid
gem-internal dependencies.
Note that the implementation follows the previous style. However, we can
no longer check for bo->gem != NULL to test for a valid gem object. This
wasn't done before, so we should be safe now.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
When booting with i915.fastboot=1, we always take that code path and end
up undoing what we're trying to do with adjusted_mode.
Hopefully, as the fastboot hardware readout code is using adjusted_mode
as well, it should be equivalent.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Instead of it just being on the mailing list, let's put Jesse's
explanation next to the code in question.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
During system boot up, by default, the render, media and display wells are
still power gated. Normally, the BIOS will turn off the power gates. On a
BIOS-less system, the driver needs to turn off the power gates very early
during driver load.
v2: Move this to intel_uncore_sanitize to allow it to get call during
resume path. (Daniel)
v3: Remove the redundant write of 0 to DPIO_CTL, and use DPIO_RESET instead of
just 0x1 (Ville)
Add turning off of the power gate for the display 2D, render and media wells.
v4: Remove the cmnreset toggle from intel_uncore_sanitize. Cmnreset should
be toggled after the CRI clock source has been selected. Jesse's DPIO reset
patch, which toggles cmnreset in intel_modeset_init_hw(), should handle it.
(Ville)
Signed-off-by: Chon Ming Lee <chon.ming.lee@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
DPIO needs to have common reset de-asserted on soft resets like boot and
S3. In some cases, the BIOS will have done this for us, but it should
be safe to do at runtime as well, as long as we do it when the pipes are
otherwise off.
v2: update bit name to match docs better (Ville)
reset after CRI clock select (Ville)
References: https://bugs.freedesktop.org/show_bug.cgi?id=69166
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
for igt test case.
v2: remove trailing spaces and fix conflicts
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
[danvet:
- make it compile
- s/IS_HASWELL/HAS_PSR/]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
After applying wait-boost we often find ourselves stuck at higher clocks
than required. The current threshold value requires the GPU to be
continuously and completely idle for 313ms before it is dropped by one
bin. Conversely, we require the GPU to be busy for an average of 90% over
an 84ms period before we upclock. So the current thresholds almost never
downclock the GPU, and respond very slowly to sudden demands for more
power. It is easy to observe that we currently lock into the wrong bin
and both underperform in benchmarks and consume more power than optimal
(just by repeating the task and measuring the different results).
An alternative approach, as discussed in the bspec, is to use a
continuous threshold for upclocking, and an average value for downclocking.
This is good for quickly detecting and reacting to state changes within a
frame, however it fails with the common throttling method of waiting
upon the outstanding frame - at least it is difficult to choose a
threshold that works well at 15,000fps and at 60fps. So continue to use
average busy/idle loads to determine frequency change.
v2: Use 3 power zones to keep frequencies low in steady-state mostly
idle (e.g. scrolling, interactive 2D drawing), and frequencies high
for demanding games. In between those end-states, we use a
fast-reclocking algorithm to converge more quickly on the desired bin.
v3: Bug fixes - make sure we reset adj after switching power zones.
v4: Tune - drop the continuous busy thresholds as it prevents us from
choosing the right frequency for glxgears style swap benchmarks. Instead
the goal is to be able to find the right clocks irrespective of the
wait-boost.
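A rough sketch of the fast-reclocking idea from v2 (the threshold names and
exact adjustment rule are illustrative, not the real gen6_pm_rps_work() code):

  if (busy_above_up_threshold) {
          adj = adj > 0 ? adj * 2 : 1;    /* successive up events accelerate */
          new_freq = cur_freq + adj;
  } else if (idle_below_down_threshold) {
          adj = adj < 0 ? adj * 2 : -1;   /* successive down events accelerate */
          new_freq = cur_freq + adj;
  } else {
          adj = 0;                        /* steady state: hold the current bin */
          new_freq = cur_freq;
  }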
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Stéphane Marchesin <stephane.marchesin@gmail.com>
Cc: Owen Taylor <otaylor@redhat.com>
Cc: "Meng, Mengmeng" <mengmeng.meng@intel.com>
Cc: "Zhuang, Lena" <lena.zhuang@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If we encounter a situation where the CPU blocks waiting for results
from the GPU, give the GPU a kick to boost its frequency.
This should work to reduce user interface stalls and to quickly promote
mesa to high frequencies - but the cost is that our requested frequency
stalls high (as we do not idle for long enough before rc6 to start
reducing frequencies, nor are we aggressive at down clocking an
underused GPU). However, this should be mitigated by rc6 itself powering
off the GPU when idle, and by the fact that energy use depends upon the workload
of the GPU in addition to its frequency (e.g. the math or sampler
functions only consume power when used). Still, this is likely to
adversely affect light workloads.
In particular, this nearly eliminates the highly noticeable wake-up lag
in animations from idle. For example, expose or workspace transitions.
(However, given the situation where we fail to downclock, our requested
frequency is almost always the maximum, except for Baytrail where we
manually downclock upon idling. This often masks the latency of
upclocking after being idle, so animations are typically smooth - at the
cost of increased power consumption.)
Stéphane raised the concern that this will punish good applications and
reward bad applications - but due to the nature of how mesa performs its
client throttling, I believe all mesa applications will be roughly
equally affected. To address this concern, and to prevent applications
like compositors from permanently boosting the RPS state, we ratelimit the
frequency of the wait-boosts each client receives.
Unfortunately, this technique is ineffective with Ironlake - which also
has dynamic render power states and suffers just as dramatically. For
Ironlake, the thermal/power headroom is shared with the CPU through
Intelligent Power Sharing and the intel-ips module. This leaves us with
no GPU boost frequencies available when coming out of idle, and due to
hardware limitations we cannot change the arbitration between the CPU and
GPU quickly enough to be effective.
v2: Limit each client to receiving a single boost for each active period.
Tested by QA to only marginally increase power, and to demonstrably
increase throughput in games. No latency measurements yet.
v3: Cater for front-buffer rendering with manual throttling.
v4: Tidy up.
v5: Sadly the compositor needs frequent boosts as it may never idle, but
due to its picking mechanism (using ReadPixels) may require frequent
waits. Those waits, along with the waits for the vrefresh swap, conspire
to keep the GPU at low frequencies despite the interactive latency. To
overcome this we ditch the one-boost-per-active-period and just ratelimit
the number of wait-boosts each client can receive.
Reported-and-tested-by: Paul Neumann <paul104x@yahoo.de>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68716
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Stéphane Marchesin <stephane.marchesin@gmail.com>
Cc: Owen Taylor <otaylor@redhat.com>
Cc: "Meng, Mengmeng" <mengmeng.meng@intel.com>
Cc: "Zhuang, Lena" <lena.zhuang@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
[danvet: No extern for function prototypes in headers.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When we switched to always using a timeout in conjunction with
wait_seqno, we lost the ability to detect missed interrupts. Since we
have had issues with interrupts on a number of generations, and they are
required to be delivered in a timely fashion for a smooth UX, it is
important that we do log errors found in the wild and prevent the
display stalling for upwards of 1s every time the seqno interrupt is
missed.
Rather than continue to fix up the timeouts to work around the interface
impedance in wait_event_*(), open code the combination of
wait_event[_interruptible][_timeout], and use the exposed timer to
poll for seqno should we detect a lost interrupt.
v2: In order to satisfy the debug requirement of logging missed
interrupts with the real-world requirements of making machines work even
if interrupts are hosed, we revert to polling after detecting a missed
interrupt.
v3: Throw in a debugfs interface to simulate broken hw not reporting
interrupts.
v4: s/EGAIN/EAGAIN/ (Imre)
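A conceptual sketch of the open-coded wait described above (not the actual
__wait_seqno() implementation; the ring fields and poll interval are assumed
names): sleep interruptibly with a timeout so a lost interrupt can be logged
and the seqno polled.

  DEFINE_WAIT(wait);
  long timeout = msecs_to_jiffies(100);   /* assumed poll interval */

  for (;;) {
          prepare_to_wait(&ring->irq_queue, &wait, TASK_INTERRUPTIBLE);
          if (i915_seqno_passed(ring->get_seqno(ring, false), seqno))
                  break;
          if (signal_pending(current)) {
                  ret = -ERESTARTSYS;
                  break;
          }
          if (schedule_timeout(timeout) == 0)
                  missed_irq = true;      /* timer fired: log it and keep polling */
  }
  finish_wait(&ring->irq_queue, &wait);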
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Imre Deak <imre.deak@intel.com>
[danvet: Don't use the struct typedef in new code.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We missed adding a few cleanup steps for recent additions.
Reviewer: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch attempts to clean up the ring/IA scaling programming in the
following ways.
1. Fix the comment about the DDR frequency. The math is 266MHz, not
133MHz. Formula was right, docs are wrong.
2. Mask the DCLK register since I don't know how it is defined on future
platforms.
3. use mult_frac instead of magic math.
This helps for future platform enabling.
v2: Actually use the right patch. The v1 was a mix of things, none of
which was right. Note that due to rounding, we actually get different
values (slightly higher) for the effective ring frequency.
v3: Use 1.25 instead of 1.33 as the original code did. (Jesse)
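As a hedged example of point 3 (the formula below is illustrative, not the
actual ring-frequency table code), mult_frac() expresses the 1.25 scaling
without magic intermediate constants:

  /* 1.25 * gpu_freq in exact integer math */
  ring_freq = mult_frac(gpu_freq, 5, 4);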
CC: Jesse Barnes <jbarnes@virtuousgeek.org>
CC: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If we ever end up doing the retry loop due to bandwidth constraints, we
would rewrite pipe_src_{w,h} based on adjusted_mode timings. But by that
time the encoder may have already replaced the adjusted_mode with a
fixed panel mode, which would then corrupt pipe_src_{w,h}.
v2: Use requested_mode and slap on a big comment from Daniel
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This workaround is described in the mode set sequence documentation.
When enabling planes for the second pipe, we need to wait for 2
vblanks on the first pipe. This should solve "a flash of screen
corruption if planes are enabled on second/third pipe during the time
that big FIFO mode is exiting". Watermarks are fun :)
v2: Save indentation levels
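A hedged sketch of the workaround (intel_wait_for_vblank() is the existing
i915 helper; the guard condition and pipe choice are illustrative only):

  if (enabling_planes_on_second_pipe) {
          intel_wait_for_vblank(dev, first_pipe);
          intel_wait_for_vblank(dev, first_pipe);
  }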
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Refactor the plane enabling/disabling into helper functions and move
the calls to happen as the first thing during .crtc_disable, and the
last thing during .crtc_enable.
Those are the two clear points where we are sure that the pipe is
actually running regardless of the encoder type or hardware
generation.
v2: Made by Paulo:
Remove the code touching everything but the Haswell functions. We
need this change on Haswell right now since it fixes a FIFO underrun
that we get on pipe A while we enable pipe B (see the workaround
notes on the Haswell mode set sequence documentation). We can bring
back the code to gens 2-7 later, once they're tested.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The global integrated clock source bit resides in DPLL B on VLV, but we
were treating it as a per-pipe resource. It needs to be set whenever
any PLL is active, so pull setting the bit out of vlv_update_pll and
into vlv_enable_pll. Also add a vlv_disable_pll to prevent disabling it
when pipe B shuts down.
I'm guessing on the references here, I expect this to bite any config
where multiple displays are active or displays are moved from pipe to
pipe.
v2: re-add bits in vlv_update_pll to keep from confusing the state checker
v3: use enum pipe checks (Daniel)
set CRI clock source early (Ville)
consistently set CRI clock source everywhere (Ville)
v4: drop unnecessary setting of bit in vlv enable pll (Ville)
References: https://bugs.freedesktop.org/show_bug.cgi?id=67245
References: https://bugs.freedesktop.org/show_bug.cgi?id=69693
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: s/1/PIPE_B/]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For some reason, every single time I try to run module_reload
something tries to read the connector sysfs files. This happens
after we destroy the encoders and before we destroy the connectors, so
when the sysfs read triggers the connector detect() function,
intel_connector->encoder points to memory that was already freed.
The bad backtrace is just:
[<ffffffff8163ca9a>] dump_stack+0x54/0x74
[<ffffffffa00c2c8e>] intel_dp_detect+0x1e/0x4b0 [i915]
[<ffffffffa001913d>] status_show+0x3d/0x80 [drm]
[<ffffffff813d5340>] dev_attr_show+0x20/0x60
[<ffffffff81221f50>] ? sysfs_read_file+0x80/0x1b0
[<ffffffff81221f79>] sysfs_read_file+0xa9/0x1b0
[<ffffffff811aaf1e>] vfs_read+0x9e/0x170
[<ffffffff811aba4c>] SyS_read+0x4c/0xa0
[<ffffffff8164e392>] system_call_fastpath+0x16/0x1b
But if you add tons of memory checking debug options to your kernel
you'll also see:
- general protection fault: 0000
- BUG kmalloc-4096 (Tainted: G D W ): Poison overwritten
- INFO: Allocated in intel_ddi_init+0x65/0x270 [i915]
- INFO: Freed in intel_dp_encoder_destroy+0x69/0xb0 [i915]
Among a bunch of other error messages.
So this commit just destroys the sysfs files before both the encoder
and connectors are freed.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Neither the DP spec nor the compliance test spec states or implies that we
should write the DP_TRAINING_PATTERN_SET at every voltage swing and
pre-emphasis change. Indeed we probably shouldn't. So don't.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=49402
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Smoke-tested-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Per DP1.2 spec.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Todd Previte <tprevite@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It indicates a probable BIOS bug, but it appears to be harmless, and
there's nothing the user can do about it anyway, so reduce to a debug
msg. I've filed a bug with the BIOS folks about it anyway, so hopefully
they'll fix whatever GT SB read they were doing when the GT was off.
References: https://bugs.freedesktop.org/show_bug.cgi?id=69396
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The kernel shouldn't accept invalid modes, just say No.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that the coding of the stereo layout has changed from a bit field to an
enum, we need to remove that check.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This allows us to use fewer bits in the mode structure, leaving room for
future work while allowing more stereo layout types than we could have
ever dreamt of.
I also exposed the previously private DRM_MODE_FLAG_3D_MASK to set in
stone that we are using 5 bits for the stereo layout enum, reserving 32
values.
Even with that reservation, we gain 3 bits from the previous encoding.
The code adding the mandatory stereo modes needed to be adapted as it was
relying on being able to OR stereo layouts together.
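With the layout stored as an enum behind DRM_MODE_FLAG_3D_MASK, callers now
mask and compare instead of testing individual bits; a hedged sketch:

  switch (mode->flags & DRM_MODE_FLAG_3D_MASK) {
  case DRM_MODE_FLAG_3D_FRAME_PACKING:
          /* frame packing */
          break;
  case DRM_MODE_FLAG_3D_SIDE_BY_SIDE_HALF:
          /* side-by-side (half) */
          break;
  default:
          /* not a stereo mode */
          break;
  }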
Suggested-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We need to use the clock control reg to figure out how many CZ clks are in
30ns and use that as the basis for our RC6 residency calculations.
v2: use ULL everywhere for consistency (Chris)
factor out bias for clarity (Chris)
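A rough sketch of the conversion (the variable names and exact bias handling
are assumptions; only the "CZ clocks in 30ns" basis comes from this patch):

  /* czcount_30ns: CZ clocks per 30 ns, from the clock control register;
   * raw: RC6 residency counter value, in CZ clock units. */
  static u64 vlv_rc6_residency_us(u64 raw, u32 czcount_30ns)
  {
          u64 time_ns = div_u64(raw * 30ULL, czcount_30ns);

          return div_u64(time_ns, 1000);  /* ns -> us */
  }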
References: https://bugs.freedesktop.org/show_bug.cgi?id=69692
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
And add some reg defines while we're at it. Since the units of the RC6
residency counter are actually in CZ clocks, we want to just use the
high bits or we'll overflow too frequently.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
vlv_find_best_dpll() has an open coded DIV_ROUND_CLOSEST(). Replace it
with the real thing.
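Illustrative only (the variable names are hypothetical, not the actual
vlv_find_best_dpll() locals):

  /* before: open-coded round-to-nearest division */
  ppm = (1000000 * diff + target / 2) / target;
  /* after: the standard helper */
  ppm = DIV_ROUND_CLOSEST(1000000 * diff, target);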
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Use 'continue' to get rid of one indent level in vlv_find_best_dpll()
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
CDCLK is used to generate the gmbus clock. This is normally done by the
BIOS. Program the value if a BIOS-less system doesn't do it.
v2: Move this to intel_i2c_reset to allow reprogram the gmbus frequency
during resume. (Daniel)
v3: Change GMBUS_FREQ to GMBUSFREQ_VLV, and use VLV_DISPLAY_BASE.
(Ville).
Remove cdclk_ratio[] table, and calculate the cdclk ratio instead.
(Ville).
Change the shift then mask for reg read, to mask first, then shift.
(Ville).
Remove the gmbus frequency calculation = cdclk/1.01. Based on BIOS
programming, gmbus frequency = cdclk frequency. (Ville)
Add get_disp_clk_div, which can use to get cdclk/czclk divide.
v4: Fix the mmio_offset base for CZCLK_CDCLK_FREQ_RATIO, gmbus_freq
calculation, and duplicate check for gmbus_freq. (Ville)
In VLV, the spec is wrong about the 4MHz reference frequency for GMBUS. It
should be 1MHz.
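A hedged sketch of the programming described above (the cdclk lookup helper
and the register encoding are assumptions; GMBUSFREQ_VLV is the register
named in v3):

  static void vlv_program_gmbus_freq(struct drm_i915_private *dev_priv)
  {
          int cdclk_khz = vlv_get_cdclk_khz(dev_priv);    /* hypothetical helper */

          /* gmbus frequency == cdclk frequency; MHz units assumed here */
          I915_WRITE(GMBUSFREQ_VLV, DIV_ROUND_UP(cdclk_khz, 1000));
  }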
Signed-off-by: Chon Ming Lee <chon.ming.lee@intel.com>
[danvet: Add the comment Ville suggested. Also appease checkpatch a
bit.]
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Still digging up the actual VBT info for this, but wanted to get this
out there for testing, or in case others are also bugged by this.
This can happen if you boot with an external display connected. In that
case, the attached eDP backlight modulation frequency may not be
programmed, so we need to use something (in this case the value my BIOS
normally programs with just the internal display enabled).
v2: fix masking and magic value in read_blc_pwm_ctl (Jani)
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67732
Tested-by: shui yangwei <yangweix.shui@intel.com> (v1)
Reviewed-by: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that we ask to adjust the crtc timings for stereo modes, the correct
pipe_src_w and pipe_src_h can be found in crtc_hdisplay and crtc_vdisplay.
v2: Add comment about why pipe_src_w/h need to be set after
set_crtcinfo() (Daniel Vetter)
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When scanning out big stereo buffers that are actually bigger than their
natural 2D counterpart, we need to blow up the crtc timings as well.
Note that this is only done for frame packing, as this is the only stereo
mode currently exposed needing this kind of adjustment.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
struct drm_display_mode now has a separate crtc_ version of the clock to
be used when we're talking about the timings given to the hardware (as
far as the mode is concerned).
This commit is really the result of a git grep adjusted_mode.*clock and
replacing those by adjusted_mode.crtc_clock. No functional change.
v2: Rebased on drm-intel-queued-next
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We want to dump the parameters given to the hardware, so let's use
crtc_clock here.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Some stereo modes, like frame packing, need a larger CRTC viewport than
the "natural" underlying 2D mode and thus drm_crtc_check_viewport()
needs to query the adjusted mode to use the correct h/vdisplay.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Both setcrtc and page_flip are checking that the framebuffer is big
enough for the defined crtc viewport (x, y, hdisplay, vdisplay). Factor
that code out in a single function.
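A sketch of the shared check (the function name comes from this series; the
exact body and return value are assumptions):

  int drm_crtc_check_viewport(const struct drm_crtc *crtc,
                              int x, int y,
                              const struct drm_display_mode *mode,
                              const struct drm_framebuffer *fb)
  {
          int hdisplay = mode->hdisplay;
          int vdisplay = mode->vdisplay;

          if (hdisplay > fb->width || vdisplay > fb->height ||
              x > fb->width - hdisplay || y > fb->height - vdisplay)
                  return -ENOSPC;

          return 0;
  }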
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When using frame packing and a single big framebuffer, some hardware
requires that we do everything as if we were scanning out the big
buffer itself. Let's instrument drm_mode_set_crtcinfo() to be able to do
this adjustment if the driver asks for it.
v2: Use crtc_vtotal and multiply the clock by 2 instead of
reconstructing it (Ville Syrjälä)
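A hedged sketch of the adjustment, consistent with the v2 note above (the
CRTC_STEREO_DOUBLE adjust flag is assumed here; the actual
drm_mode_set_crtcinfo() code may differ in details):

  if (adjust_flags & CRTC_STEREO_DOUBLE) {
          switch (p->flags & DRM_MODE_FLAG_3D_MASK) {
          case DRM_MODE_FLAG_3D_FRAME_PACKING:
                  /* cover both eyes plus the vblank between them, double the clock */
                  p->crtc_clock *= 2;
                  p->crtc_vdisplay += p->crtc_vtotal;
                  p->crtc_vsync_start += p->crtc_vtotal;
                  p->crtc_vsync_end += p->crtc_vtotal;
                  p->crtc_vtotal += p->crtc_vtotal;
                  break;
          }
  }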
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Just like the various timings, make it possible to have a clock field
that we can tweak before giving it to the hardware.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This field is unused. Garbage collect it.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>