The WaDisableLSQCROPERFforOCL workaround has the side effect of
disabling an L3SQ optimization that has huge performance implications
and is unlikely to be necessary for the correct functioning of usual
graphics workloads. Userspace is free to re-enable the workaround on
demand, and is generally in a better position to determine whether the
workaround is necessary than the DRM is (e.g. only during the
execution of compute kernels that rely on both L3 fences and HDC R/W
requests).
The same workaround seems to apply to BDW (at least to production
stepping G1) and SKL as well (the internal workaround database claims
that it does for all steppings, while the BSpec workaround table only
mentions pre-production steppings), but the DRM doesn't do anything
beyond whitelisting the L3SQCREG4 register so userspace can enable it
when it sees fit. Do the same on KBL platforms.
Improves performance of the GFXBench4 gl_manhattan31 benchmark by 60%,
and gl_4 (AKA car chase) by 14% on a KBL GT2 running Mesa master --
This is followed by a regression of 35% and 10% respectively for the
same benchmarks and platform caused by my recent patch series
switching userspace to use the dataport constant cache instead of the
sampler to implement uniform pull constant loads, which caused us to
hit the L3 cache more heavily (and on platforms other than KBL had the
opposite effect of improving performance of the same two benchmarks).
The overall effect on KBL of this change combined with the recent
userspace change is an improvement of 4.6% and 2.6% respectively. SynMark2 OglShMapPcf
was affected by the constant cache changes (though it improved as it
did on other platforms rather than regressing), but is not
significantly affected by this patch (with statistical significance of
5% and sample size 20).
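For reference, exposing the register to userspace amounts to whitelisting it in the engine workaround setup, roughly as below (a sketch only, assuming the wa_ring_whitelist_reg() helper and the GEN8_L3SQCREG4 register definition that i915 carries around this time):

static int kbl_whitelist_l3sqcreg4(struct intel_engine_cs *engine)
{
        /* Sketch: let userspace toggle the L3SQCREG4 workaround bit itself
         * instead of the kernel applying WaDisableLSQCROPERFforOCL.
         */
        return wa_ring_whitelist_reg(engine, GEN8_L3SQCREG4);
}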
v2: Drop some more code to avoid an unused variable warning.
Fixes: 738fa1b312 ("drm/i915/kbl: Add WaDisableLSQCROPERFforOCL")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99256
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: Eero Tamminen <eero.t.tamminen@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: beignet@lists.freedesktop.org
Cc: <stable@vger.kernel.org> # v4.7+
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
[Removed double Fixes tag]
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1484217894-20505-1-git-send-email-mika.kuoppala@intel.com
(cherry picked from commit 8726f2faa3)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
GT reset and FLR share some operations, and both are implemented in our
new function intel_gvt_reset_vgpu_locked(). This patch rewrites the GT
reset handler using this new function.
In addition, the new implementation fixes an issue in the old GT reset
path: the old code reset GGTT entries, which is illegal. We only clear
GGTT entries on a PCI-level reset.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Our functional tests found several issues related to reusing a vGPU
instance: QEMU reboot failure, guest TDR after reboot, and host hang
when rebooting the guest. All of these issues are caused by dirty state
inherited from the previous VM.
This patch fixes all of these issues by resetting the virtual GPU before
a VM uses it. The reset logic is put into a low-level function
_intel_gvt_reset_vgpu(), which supports Device Model Level Reset, Full
GT Reset and Per-Engine Reset.
vGPU Device Model Level Reset (DMLR) simulates a PCI reset and resets
the whole vGPU to the default state it has when created, including GTT,
execlist, scratch pages, cfg space, MMIO space, pvinfo page, scheduler
and fence registers. The ultimate goal of vGPU DMLR is to allow a vGPU
instance to be reused by different virtual machines. When we reassign a
vGPU to a virtual machine, we must issue such a reset first.
Full GT Reset and Per-Engine GT Reset are soft reset flows for the GPU
engines (Render, Blitter, Video, Video Enhancement), as defined by the
GPU specification. Unlike FLR, a GT reset only resets particular
resources of a vGPU according to the reset request. The guest driver can
issue a GT reset by programming the virtual GDRST register to reset a
specific virtual GPU engine or all engines.
Since vGPU DMLR and GT reset share some code, both are implemented in a
single function, intel_gvt_reset_vgpu_locked(). The parameter dmlr
identifies whether we do an FLR or a GT reset. The parameter engine_mask
specifies the engines that need to be reset. If the value ALL_ENGINES is
given for engine_mask, the caller requests a full GT reset and we reset
all virtual GPU engines.
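A rough sketch of how the dmlr/engine_mask dispatch fits together (only
intel_gvt_reset_vgpu_locked(), dmlr, engine_mask and ALL_ENGINES come
from this description; the per-component helpers below are placeholders):

void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr,
                                 unsigned int engine_mask)
{
        unsigned int resetting_eng = dmlr ? ALL_ENGINES : engine_mask;

        /* Both DMLR and GT reset tear down the requested engines' state. */
        reset_engine_state(vgpu, resetting_eng);

        if (dmlr) {
                /* DMLR: bring the vGPU back to its freshly-created state. */
                reset_gtt_state(vgpu);
                reset_cfg_space_state(vgpu);
                reset_mmio_state(vgpu);
                reset_fence_state(vgpu);
        }
}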
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Jike Song <jike.song@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch introduces a new function intel_vgpu_reset_mmio() to
reset the vGPU MMIO space (the virtual registers of the vGPU). The
default values are loaded as firmware during GVT initialization.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Move the MMIO space initialization function setup_vgpu_mmio()
and the cleanup function clean_vgpu_mmio() from vgpu.c to the dedicated
source file mmio.c, and rename them to intel_vgpu_init_mmio()
and intel_vgpu_clean_mmio() respectively.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch introduces a new function intel_vgpu_reset_cfg_space()
to reset the vGPU configuration space. This function unmaps GTTMMIO
and the aperture if they were previously mapped, then restores the
entire cfg space to default values.
Currently we only perform such a reset when the vGPU is not owned by any
VM, so we simply restore the entire cfg space to default values rather
than following the PCIe FLR spec, which requires some fields to remain
unchanged.
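A sketch of the intent (the unmap helper and field names are
assumptions, not the actual gvt code; PCI_CFG_SPACE_EXP_SIZE is the
standard 4KB extended config space size):

void intel_vgpu_reset_cfg_space(struct intel_vgpu *vgpu)
{
        /* Tear down GTTMMIO/aperture mappings left behind by the last VM. */
        unmap_vgpu_bars(vgpu);

        /* Restore every byte of cfg space to the defaults captured at init.
         * A strict PCIe FLR would preserve some fields, but no VM owns the
         * vGPU at this point, so a full restore is fine.
         */
        memcpy(vgpu->cfg_space.virtual_cfg_space,
               vgpu->gvt->firmware.cfg_space,
               PCI_CFG_SPACE_EXP_SIZE);
}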
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Move the configuration space initialization function setup_vgpu_cfg_space()
from vgpu.c to the dedicated source file cfg_space.c, and rename it to
intel_vgpu_init_cfg_space().
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch introduces a new function intel_vgpu_reset_gtt() to reset
all GTT-related state, including GGTT, PPGTT and the scratch page. This
function frees all shadowed PPGTTs, clears all GGTT entries, and zeroes
the scratch page. After this, we can ensure that no GTT-related
information is leaked from one VM to another when a vGPU instance is
reassigned across different VMs (not simultaneously).
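Roughly (the helper and field names below are placeholders; only the
three steps come from the description above):

void intel_vgpu_reset_gtt(struct intel_vgpu *vgpu)
{
        /* Drop every shadow PPGTT built for the previous VM. */
        release_all_shadow_ppgtt(vgpu);

        /* Point every GGTT entry back at the scratch page... */
        clear_vgpu_ggtt_entries(vgpu);

        /* ...and make sure the scratch page itself carries no stale data. */
        memset(vgpu->gtt.scratch_page_va, 0, PAGE_SIZE);
}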
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch introduces a new function intel_vgpu_reset_resource() to
reset the vGPU resources allocated by intel_vgpu_alloc_resource(). So far
we only need to clear the fence registers. The function _clear_vgpu_fence()
resets both virtual and physical fence registers to 0.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
In gvt, almost all memory allocations happen in sleepable contexts. It is
error-prone to use GFP_ATOMIC everywhere. Replace it with GFP_KERNEL
wherever possible.
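For instance, in a sleepable path an allocation simply becomes
(illustrative pattern, not a specific call site from this patch):

        obj = kzalloc(sizeof(*obj), GFP_KERNEL);  /* may sleep and reclaim */
        if (!obj)
                return -ENOMEM;

as opposed to kzalloc(sizeof(*obj), GFP_ATOMIC), which cannot sleep and
fails much more easily under memory pressure.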
Signed-off-by: Jike Song <jike.song@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The vgpu_create() routine we call returns meaningful errors to indicate
failures, so we should pass them on to our caller, the mdev framework,
so that sysfs can tell userspace what happened.
Signed-off-by: Jike Song <jike.song@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
According to the spec, ACPI OpRegion must be placed at a physical address
below 4G. That is, for a vGPU it must be associated with a GPA below 4G,
but on host side, it doesn't matter where the backing pages actually are.
So when allocating pages from host, the GFP_DMA32 flag is unnecessary.
The allocation is also made from a sleepable context, so GFP_ATOMIC is
unnecessary as well.
This patch also removes INTEL_GVT_OPREGION_PORDER and uses get_order()
instead.
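The allocation then looks roughly like this (the size macro name is an
assumption; get_order() and __get_free_pages() are standard kernel APIs):

        opregion_va = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
                                get_order(INTEL_GVT_OPREGION_SIZE));
        if (!opregion_va)
                return -ENOMEM;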
Signed-off-by: Jike Song <jike.song@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Once idr_alloc() is called, data is allocated within the IDR; if any
error occurs afterwards, we should undo that with idr_remove() on the
error path.
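The general shape of the fix (a generic sketch of the pattern, not the
exact gvt code; names are illustrative):

        id = idr_alloc(&gvt->vgpu_idr, vgpu, 1, GVT_MAX_VGPU, GFP_KERNEL);
        if (id < 0)
                return id;
        vgpu->id = id;

        ret = setup_vgpu(vgpu);         /* any later step that can fail */
        if (ret) {
                /* Undo idr_alloc() so the slot isn't leaked. */
                idr_remove(&gvt->vgpu_idr, vgpu->id);
                return ret;
        }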
Signed-off-by: Jike Song <jike.song@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The vgpu->running_workload_num is used to determine whether a vgpu has
any workload running or not. So we should make sure the workload is
really done before we decrement running_workload_num.
complete_current_workload() is not the right place to do it, since that
function is still processing the workload. This patch moves the
decrement afterward.
v2: move dec op before wake_up(&scheduler->workload_complete_wq) (Min He)
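The intended ordering in the scheduler thread then becomes (a sketch;
running_workload_num and workload_complete_wq are taken from above, the
rest is illustrative):

        complete_current_workload(gvt); /* workload fully processed here */

        /* Only now is the workload really done, so drop the count... */
        atomic_dec(&vgpu->running_workload_num);

        /* ...before waking anyone waiting for the vgpu to go idle. */
        wake_up(&scheduler->workload_complete_wq);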
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Min He <min.he@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
In the function workload_thread(), we invoke complete_current_workload()
to clean up the just-processed workload (the workload is freed there),
so we cannot access workload->req after that call. This patch moves the
complete_current_workload() call after the last use of workload->req.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Remove the duplicated resource size definitions in aperture_gm.c, which
are already defined in gvt.h. Only one definition should be in effect.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The high memory size previously initialized for vGPU types was too small,
which caused failures for some VMs. This takes a minimum value of 384MB
for each VM and enlarges the default high memory size to keep the guest
driver happy.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
In the function intel_vgpu_emulate_mmio_read(), an untracked MMIO
register is dumped to the kernel log, but the reported register value is
not correct. This patch fixes the issue.
V2: fix the format warning from checkpatch.pl.
Signed-off-by: Pei Zhang <pei.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
readq() and writeq() are already provided by drm_os_linux.h, so we can
use them directly without detecting their presence. This patch removes
the duplicated code.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Return earlier for an invalid access; otherwise it would falsely set the
TLB flag for RCS.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The current prototype of new_mmio_info() uses void* for its read and
write parameters, which are function pointers with precise calling
conventions (argument types and return type). Spell out these
conventions in the new_mmio_info() definition.
This has been reported by the following warnings when clang is used to
build the kernel:
drivers/gpu/drm/i915/gvt/handlers.c:124:21: error: pointer type
mismatch ('void *' and 'int (*)(struct intel_vgpu *, unsigned int,
void *, unsigned int)') [-Werror,-Wpointer-type-mismatch]
info->read = read ? read : intel_vgpu_default_mmio_read;
^ ~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/gpu/drm/i915/gvt/handlers.c:125:23: error: pointer type
mismatch ('void *' and 'int (*)(struct intel_vgpu *, unsigned int,
void *, unsigned int)') [-Werror,-Wpointer-type-mismatch]
info->write = write ? write : intel_vgpu_default_mmio_write;
^ ~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This allows the compiler to detect that sbi_ctl_mmio_write() returns a
"bool" value instead of an expected "int" one. Fix this.
Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Pull VFIO fixes from Alex Williamson:
- Add mtty sample driver properly into build system (Alex Williamson)
- Restore type1 mapping performance after mdev (Alex Williamson)
- Fix mdev device race (Alex Williamson)
- Cleanups to the mdev ABI used by vendor drivers (Alex Williamson)
- Build fix for old compilers (Arnd Bergmann)
- Fix sample driver error path (Dan Carpenter)
- Handle pci_iomap() error (Arvind Yadav)
- Fix mdev ioctl return type (Paul Gortmaker)
* tag 'vfio-v4.10-rc3' of git://github.com/awilliam/linux-vfio:
vfio-mdev: fix non-standard ioctl return val causing i386 build fail
vfio-pci: Handle error from pci_iomap
vfio-mdev: fix some error codes in the sample code
vfio-pci: use 32-bit comparisons for register address for gcc-4.5
vfio-mdev: Make mdev_device private and abstract interfaces
vfio-mdev: Make mdev_parent private
vfio-mdev: de-polute the namespace, rename parent_device & parent_ops
vfio-mdev: Fix remove race
vfio/type1: Restore mapping performance with mdev support
vfio-mdev: Fix mtty sample driver building
Pull swiotlb fixes from Konrad Rzeszutek Wilk:
"This has one fix to make i915 work when using Xen SWIOTLB, and a
feature from Geert to aid in debugging of devices that can't do DMA
outside the 32-bit address space.
The feature from Geert is on top of v4.10 merge window commit
(specifically you pulling my previous branch), as his changes were
dependent on the Documentation/ movement patches.
I figured that would just be easier than me trying to cherry-pick the
Documentation patches to satisfy git.
The patches have been soaking since 12/20, albeit I updated the last
patch due to linux-next catching a compiler error and adding a
Tested-and-Reported-by tag"
* 'stable/for-linus-4.10' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb:
swiotlb: Export swiotlb_max_segment to users
swiotlb: Add swiotlb=noforce debug option
swiotlb: Convert swiotlb_force from int to enum
x86, swiotlb: Simplify pci_swiotlb_detect_override()
So they can figure out the optimal number of pages
that can be contiguously stitched together without fear of
bounce buffering.
We also expose a mechanism for sub-users of the SWIOTLB API, such
as Xen-SWIOTLB, to set the max segment value. And lastly,
if swiotlb=force is set (which mandates that we bounce buffer everything)
we set max_segment so that at least we can bounce buffer one 4K page
instead of a giant 512KB one for which we may not have space.
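From a driver's point of view the usage is roughly (a sketch of intended
usage; swiotlb_max_segment() is the interface exported here, the
fallback is illustrative):

        unsigned int max_segment = swiotlb_max_segment();

        /* 0 means SWIOTLB imposes no limit; fall back to a page-aligned,
         * effectively unbounded segment size.
         */
        if (!max_segment)
                max_segment = rounddown(UINT_MAX, PAGE_SIZE);

        /* Build scatterlist segments no larger than max_segment so any
         * bounce buffering stays within what SWIOTLB can allocate.
         */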
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reported-and-Tested-by: Juergen Gross <jgross@suse.com>
Apparently some VLV BIOSen like to leave the VDD force bit enabled
even for power sequencers that aren't properly hooked up to any
port. That will result in an imbalance in the AUX power domain
refcount when we start to use said power sequencer, as edp_panel_vdd_on()
will not grab the power domain reference if it sees that VDD is
already on.
To fix this, let's make sure we turn off the VDD force bit when we
initialize the power sequencer registers. That is, unless it's
being done from the init path, since there we are actually
initializing the registers for the current power sequencer and
we don't want to turn VDD off needlessly, as that would require
waiting for the power cycle delay before we turn it back on.
This fixes the following kind of warnings:
WARNING: CPU: 0 PID: 123 at ../drivers/gpu/drm/i915/intel_runtime_pm.c:1455 intel_display_power_put+0x13a/0x170 [i915]()
WARN_ON(!power_domains->domain_use_count[domain])
...
v2: Fix typos in comment (David)
Cc: stable@vger.kernel.org
Cc: Matwey V. Kornilov <matwey.kornilov@gmail.com>
Tested-by: Matwey V. Kornilov <matwey.kornilov@gmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98695
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161220165117.24801-1-ville.syrjala@linux.intel.com
Reviewed-by: David Weinehall <david.weinehall@linux.intel.com>
(cherry picked from commit 5d5ab2d26f)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>