Commit aee3bac0a3a8 ("drm/i915/psr: Tie PSR2 support to Y
coordinate requirement") got merged to drm-intel-next-queued,
but the variable it uses was defined in commit c5fe47327b06 ("drm: Add PSR
version 3 macro"), which was merged through drm-misc.
So backmerge to get drm-intel-next-queued compiling again.
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Backmerge tag 'v4.16-rc7' into drm-next
Linux 4.16-rc7
This was requested by Daniel, and things were getting
a bit hard to reconcile; most of the conflicts were
trivial, though.
Our shadow context content comes from the guest, but for masked control registers
like CTX_CONTEXT_CONTROL we need to make sure all guest settings take effect
when this context runs on hardware. This change forces the mask enable bits on
for all fields, so that every bit setting is effective on hardware.
One regression was found: once the inhibit bit is set, the GPU engine stays in
the inhibit state until an MI_LOAD_REG_IMM command or the context image clears
the inhibit bit (mask bit set to 1 and value bit set to 0). Currently GVT-g
workloads have the highest priority, so a GVT-g workload can easily trigger the
preempt context, which sets the inhibit bit. The GVT-g workload is then
scheduled in, but its shadow context image usually doesn't set the inhibit mask
bit, so the GPU is still in the inhibit state while the GVT workload runs. This
caused a GPU hang.
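As an aside, a minimal self-contained sketch of the masked-register convention
this relies on (illustrative, not driver code): the upper 16 bits of a write act
as a per-bit write-enable mask for the lower 16 bits, so forcing the mask bits
on guarantees the corresponding value bits actually land in the register.

#include <stdint.h>
#include <stdio.h>

/* i915-style masked register write: bit N of the value only takes effect
 * if bit N+16 (its mask bit) is set in the same write. */
static uint32_t masked_field(uint16_t mask, uint16_t value)
{
	return ((uint32_t)mask << 16) | value;
}

int main(void)
{
	/* Mask bit clear: the hardware ignores the value bit. */
	printf("unmasked write: 0x%08x\n", masked_field(0x0000, 0x0001));
	/* Mask bit forced on: the value bit is guaranteed to apply. */
	printf("masked write:   0x%08x\n", masked_field(0x0001, 0x0001));
	return 0;
}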
Suggested-by: Zhang, Xiong <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Reviewed-by: Zhang, Xiong <xiong.y.zhang@intel.com>
The PDPs of a shadow page will only be valid after a vGPU mm is pinned.
So the PDPs in the shadow context should be updated then.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Different OSes might handle GVT PPGTT creation/destroy notifications
differently during a vGPU reset, so a better approach is to invalidate all
vGPU PPGTT mm objects during vGPU reset.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Trivial fix to spelling mistake in gvt_err error message text.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Redundant messages are printed when:
- creating a Linux guest.
- booting a Win10 guest with dma-buf.
- stress testing with Xonotic in a Linux guest.
Add below registers to default MMIO handler:
0xd00, RPM_CONFIG0
0xd40, RC6_LOCATION
0x65010, HSW_AUD_MISC_CTRL
0x6671c,
0x700a0, CUR_FBC_CTL
0x7239c,
v2:
- Init i915_reg_t using uint32_t instead of the _MMIO macro.
  (fixes compile errors)
- Use the offsets defined in i915_reg.h.
  (Zhenyu)
Signed-off-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Once the ring buffer is copied to ring_scan_buffer and scanned,
the shadow batch buffer start address is only updated in
ring_scan_buffer, not in the real ring buffer allocated through
intel_ring_begin in the later copy_workload_to_ring_buffer.
This patch only sets the right shadow batch buffer address
in the ring buffer; it does not cover shadow_wa_ctx.
v2:
- refine some comments. (Zhenyu)
v3:
- fix typo in title. (Zhenyu)
v4:
- remove the unnecessary comments. (Zhenyu)
- add comments in bb_start_cmd_va update. (Zhenyu)
Fixes: 0a53bc07f0 ("drm/i915/gvt: Separate cmd scan from request allocation")
Cc: stable@vger.kernel.org # v4.15
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Yulei Zhang <yulei.zhang@intel.com>
Signed-off-by: fred gao <fred.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
When populating the shadow ctx from the guest, we should handle OA related
registers in the HW ctx, so that they will not be overwritten by guest OA
configs. This patch makes it possible to capture OA data from the host for
both the host and guests.
Signed-off-by: Min He <min.he@intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
If a user repeatedly creates a vGPU, boots the guest, shuts down the guest and
destroys the vGPU from remote, the following call trace sometimes shows up in dmesg:
[ 6412.954721] RPM wakelock ref not held during HW access
[ 6412.954795] WARNING: CPU: 7 PID: 11941 at
linux/drivers/gpu/drm/i915/intel_drv.h:1800
intel_uncore_forcewake_get.part.7+0x96/0xa0 [i915]
[ 6412.954915] Call Trace:
[ 6412.954951] intel_uncore_forcewake_get+0x18/0x20 [i915]
[ 6412.954989] intel_gvt_switch_mmio+0x8e/0x770 [i915]
[ 6412.954996] ? __slab_free+0x14d/0x2c0
[ 6412.955001] ? __slab_free+0x14d/0x2c0
[ 6412.955006] ? __slab_free+0x14d/0x2c0
[ 6412.955041] intel_vgpu_stop_schedule+0x92/0xd0 [i915]
[ 6412.955073] intel_gvt_deactivate_vgpu+0x48/0x60 [i915]
[ 6412.955078] __intel_vgpu_release+0x55/0x260 [kvmgt]
When this happens, gvt_switch_mmio is called at vGPU destroy while host i915 is
idle and doesn't hold an RPM wakelock, so the IGD is in powersave mode. But
gvt_switch_mmio requires the IGD to be powered on to access registers, so
intel_runtime_pm_get should be added to make sure the IGD is powered on before
gvt_switch_mmio runs.
v2: Move runtime_pm_get/put into gvt_switch_mmio. (Zhenyu)
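A minimal sketch of the v2 shape, assuming dev_priv is reachable and using an
illustrative inner helper; the point is only that the runtime-PM reference
brackets the register save/restore:

static void gvt_switch_mmio_pm(struct drm_i915_private *dev_priv,
			       struct intel_vgpu *pre,
			       struct intel_vgpu *next,
			       int ring_id)
{
	/* Power the device up (and keep it up) before touching registers. */
	intel_runtime_pm_get(dev_priv);
	switch_mmio(pre, next, ring_id);	/* illustrative inner helper */
	intel_runtime_pm_put(dev_priv);
}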
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
In XenGT, ioreq copy is used to trap both MMIO writes and PPGTT writes. Both
of them are memory writes, so the ioreq handler can't distinguish them directly.
Instead, the ioreq handler probes the PPGTT write handler: if it succeeds, the
ioreq is a PPGTT write, otherwise it is an MMIO write.
So the PPGTT write handler should return an error when it fails to find the
page track; this is essential for implementing the ioreq handler in XenGT.
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
page_track_handler takes a lock at the beginning; the lock should also be
released when finding the page track fails. Otherwise a deadlock will happen.
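A sketch of the fixed error path, with assumed lock, return value and handler
names (the exact locking in page_track.c may differ):

int intel_vgpu_page_track_handler(struct intel_vgpu *vgpu, u64 gpa,
				  void *data, unsigned int bytes)
{
	struct intel_vgpu_page_track *page_track;
	int ret = 0;

	mutex_lock(&vgpu->gvt->lock);		/* assumed lock */

	page_track = intel_vgpu_find_page_track(vgpu, gpa >> PAGE_SHIFT);
	if (!page_track) {
		ret = -ENXIO;
		goto out;	/* previously returned here with the lock held */
	}

	/* ... invoke page_track->handler() ... */

out:
	mutex_unlock(&vgpu->gvt->lock);
	return ret;
}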
Fixes: e502a2af4c ("drm/i915/gvt: Provide generic page_track infrastructure for write-protected page")
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Add a new debugfs entry kvmgt_nr_cache_entries under vgpu which shows
the number of entries in the DMA cache.
$ cat /sys/kernel/debug/gvt/vgpu1/kvmgt_nr_cache_entries
10101
v3: fix compile error for some configurations. (Xiong Zhang <xiong.y.zhang@intel.com>)
v2: keep debugfs layout flat.
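A minimal sketch of how such an entry can be exposed, assuming a per-vGPU
debugfs directory and an unsigned long counter (field names are illustrative):

/* Read-only file under /sys/kernel/debug/gvt/vgpuN/ showing the number
 * of entries currently held in the kvmgt DMA cache. */
debugfs_create_ulong("kvmgt_nr_cache_entries", 0444,
		     vgpu->debugfs, &vgpu->vdev.nr_cache_entries);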
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The current kvmgt implementation implicitly sets up the DMA mapping in the MPT
API gfn_to_mfn. First, this design goes against the API's original purpose.
Second, there is no unmap path in this design. The result is that the
DMA mappings keep growing larger and larger. In the multi-VM case, they
quickly consume the low 4GB of IOMMU IOVA address space, and tons of rbtree
entries are created in the IOMMU IOVA allocator. Eventually, a single IOVA
allocation can take as long as ~70ms. Such latency is intolerable.
To address both issues above, this patch introduces two new MPT APIs:
o dma_map_guest_page - set up a DMA mapping for a guest page
o dma_unmap_guest_page - tear down the DMA mapping for a guest page
kvmgt implements these two APIs. To reduce DMA setup overhead for
duplicated pages (e.g. scratch pages), two caches are used: one maps
gfn to struct gvt_dma, the other maps dma addr to struct gvt_dma.
With these two new APIs, the GTT is now able to cancel a DMA mapping when a
page table is invalidated, so the DMA mappings no longer grow without bound.
v2: follow the old logic for VFIO_IOMMU_NOTIFY_DMA_UNMAP at this point.
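An illustrative sketch of the two hooks as they might appear in the MPT ops
table (signatures are assumptions, not the exact in-tree prototypes):

struct intel_gvt_mpt {
	/* ... existing hooks ... */

	/* Pin the guest page and set up an IOMMU mapping for it. */
	int (*dma_map_guest_page)(unsigned long handle, unsigned long gfn,
				  dma_addr_t *dma_addr);
	/* Drop the mapping once the shadow page table entry is invalidated. */
	void (*dma_unmap_guest_page)(unsigned long handle,
				     dma_addr_t dma_addr);
};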
Cc: Hang Yuan <hang.yuan@intel.com>
Cc: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Fix below error with minor code refactor.
CHECK drivers/gpu/drm/i915//gvt/handlers.c
drivers/gpu/drm/i915//gvt/handlers.c:203 sanitize_fence_mmio_access() error: 'vgpu' dereferencing possible ERR_PTR()
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Fix check error at
CHECK drivers/gpu/drm/i915//gvt/kvmgt.c
drivers/gpu/drm/i915//gvt/kvmgt.c:455 intel_vgpu_create() error: we previously assumed 'vgpu' could be null (see line 454)
For a failed vgpu create, just show the error return in the failure message.
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
There is an issue related to Coarse Power Gating (CPG) on KBL NUC in GVT-g:
the vGPU can't get the correct default context by updating the registers before
inhibit context submission. It always gets back the hardware default value
unless the inhibit context submission happens before the first
forcewake put. With this wrong default context, the vGPU will run with an
incorrect state and hit unknown issues.
The solution is to initialize these MMIOs by adding LRI commands to the ring
buffer of the inhibit context; the GPU hardware then has no chance to drop into
RC6 while the LRI commands are being executed, and the vGPU can get the correct
default context for further use.
v3:
- fix code fault, use 'for' to loop through the mmio render list (Zhenyu)
v4:
- save the count of engine MMIOs that need to be restored for the inhibit
context and refine some comments. (Kevin)
v5:
- code rebase
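A sketch of the approach, with assumed names for the per-engine MMIO list and
the saved value field; the essential point is that MI_LOAD_REGISTER_IMM writes
are emitted from the inhibit context's ring, so the engine is necessarily awake
while they execute:

static int restore_inhibit_context(struct i915_request *rq,
				   const struct engine_mmio *mmio_list,
				   unsigned int count)
{
	const struct engine_mmio *mmio;
	u32 *cs;

	cs = intel_ring_begin(rq, count * 2 + 2);
	if (IS_ERR(cs))
		return PTR_ERR(cs);

	*cs++ = MI_LOAD_REGISTER_IMM(count);
	for (mmio = mmio_list; mmio < mmio_list + count; mmio++) {
		*cs++ = i915_mmio_reg_offset(mmio->reg);
		*cs++ = mmio->value;	/* assumed: vGPU default to restore */
	}
	*cs++ = MI_NOOP;

	intel_ring_advance(rq, cs);
	return 0;
}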
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
No functional change, just for ease of use.
v4:
- refine comment (Kevin)
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
No functional change. This definition will also be used in future patches.
v4:
- refine patch description (Kevin)
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
We don't know how many page tables will be shadowed; it varies
considerably with guest load, so a radix tree is a better
choice for us. Since the page frame number is used as the key, most of
the key bits are common.
Here is some performance data (duration in us) for looking up an
element:
Before: (aka. ppgtt_find_shadow_page)
0.308 0.292 0.246 0.432 0.143 ... 0.311 0.225 0.382 0.199 0.325
After: (aka. intel_vgpu_find_spt_by_mfn)
0.106 0.106 0.107 0.106 0.105 0.107 ... 0.107 0.109 0.105 0.108
This time I didn't capture the early-stage data for the hash table. The data is
measured when the desktop is shown.
As with the last change, the overall benchmark is almost unchanged, but
we get better scalability.
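A sketch of the lookup path with a radix tree keyed by the shadow page MFN
(tree and field names are assumptions):

/* Insert a shadow page table keyed by the MFN of its shadow page, so the
 * spt referenced by a shadowed PTE can be found without a hash walk. */
static int vgpu_spt_tree_insert(struct intel_vgpu *vgpu,
				struct intel_vgpu_ppgtt_spt *spt)
{
	return radix_tree_insert(&vgpu->gtt.spt_tree,
				 spt->shadow_page.mfn, spt);
}

static struct intel_vgpu_ppgtt_spt *
intel_vgpu_find_spt_by_mfn(struct intel_vgpu *vgpu, unsigned long mfn)
{
	return radix_tree_lookup(&vgpu->gtt.spt_tree, mfn);
}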
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch provides a generic page_track infrastructure for write-protected
guest pages. The old page_track logic gets rewritten and now lives in a new
standalone page_track.c. This page track infrastructure can be used by both
vGuC and GTT shadowing.
The important change is that it uses a radix tree instead of a hash table,
since we don't have a predictable number of pages that will be tracked.
Here is some performance data (duration in us) for looking up an element:
Before: (aka. intel_vgpu_find_tracked_page)
0.091 0.089 0.090 ... 0.093 0.091 0.087 ... 0.292 0.285 0.292 0.291
After: (aka. intel_vgpu_find_page_track)
0.104 0.105 0.100 0.102 0.102 0.100 ... 0.101 0.101 0.105 0.105
The hash table has good performance at the beginning, but degrades as
more pages are tracked, even when no 3D applications are running. As
expected, the radix tree lookup time is stable and very quick.
The overall benchmark (tested with the Heaven Benchmark) improved marginally
since this is not the bottleneck. What we benefit more from this change
is scalability.
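For reference, a sketch of the standalone interface this introduces
(prototypes are best-effort reconstructions, not verbatim):

/* page_track.h (illustrative) */
struct intel_vgpu_page_track {
	gvt_page_track_handler_t handler;	/* write handler for the page */
	bool tracked;				/* currently write-protected? */
	void *priv_data;
};

struct intel_vgpu_page_track *
intel_vgpu_find_page_track(struct intel_vgpu *vgpu, unsigned long gfn);

int intel_vgpu_register_page_track(struct intel_vgpu *vgpu, unsigned long gfn,
				   gvt_page_track_handler_t handler,
				   void *priv_data);
void intel_vgpu_unregister_page_track(struct intel_vgpu *vgpu,
				      unsigned long gfn);

int intel_vgpu_enable_page_track(struct intel_vgpu *vgpu, unsigned long gfn);
int intel_vgpu_disable_page_track(struct intel_vgpu *vgpu, unsigned long gfn);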
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Don't extend page_track to the MPT layer; keep MPT simple and clean.
Meanwhile, remove gtt.n_tracked_guest_page, which doesn't make much
sense.
v2: clean up gtt.n_tracked_guest_page.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
kvmgt's implementation of the MPT API {set,unset}_wp_page is not real
write-protection - the data gets written before these two APIs are invoked.
As discussed, change the MPT API to match the real behavior.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The target structure of some functions is struct intel_vgpu_ppgtt_spt, but
their names are xxx_shadow_page; they should be xxx_shadow_page_table. Let's
use the short name 'spt' instead to reduce the length, and do the same for
the hash table name.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This is another big one; the GVT shadow page management code is
heavily refined.
The new code only uses struct intel_vgpu_ppgtt_spt to represent a vGPU
shadow page table - with or without a guest page associated. A pure shadow
page (no guest page associated) will be used to shadow a split 2M huge
GTT page. In this case, spt.guest_page.gfn should be zero.
To look up an existing shadow page table, we have two new interfaces:
- intel_vgpu_find_spt_by_gfn(), find an spt by guest gfn. It must not
be a pure spt.
- intel_vgpu_find_spt_by_mfn(), find the spt using the shadow page mfn in
a shadowed PTE.
The oos_page management remains as it was.
v2: Split some changes into small standalone patches.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Make the shadow PTE population code clear. Later we will add huge gtt
support based on this.
v2:
- rebase to latest code.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Zhi Wang <zhi.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
GTT entries have a similar format to CPU PTEs. We'd prefer named macros
instead of hardcoded values.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Factor out these two interfaces so we can kill some duplicated code in
scheduler.c.
v2:
- rename to intel_vgpu_{get,put}_ppgtt_mm
- refine handle_g2v_notification
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Accurate names help to avoid confusion and improve readability.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This adds a new macro gvt_vdbg_mm() to print more verbose logs for
gtt shadowing. The added verbose logs are very useful for debugging.
gvt_vdbg_mm() only comes into effect if VERBOSE_DEBUG is defined by
the developer.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Less code, and use the existing helper ggtt_set_host_entry.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Separate ggtt and ppgtt since they are different. A little more code but
straightforward.
And move these helpers to gtt.c since that is the only client.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
If we manage an object with a reference count, then its life cycle
must follow the reference count operations. Meanwhile, change the
operation functions to the generic names *get* and *put*.
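A minimal sketch of the get/put discipline (whether a kref or a bare atomic is
used is an implementation detail; helper names are illustrative):

static void vgpu_mm_release(struct kref *kref)
{
	struct intel_vgpu_mm *mm = container_of(kref, typeof(*mm), ref);

	/* Teardown only runs when the last reference is dropped. */
	invalidate_and_destroy_mm(mm);	/* hypothetical teardown helper */
}

static inline void intel_vgpu_mm_get(struct intel_vgpu_mm *mm)
{
	kref_get(&mm->ref);
}

static inline void intel_vgpu_mm_put(struct intel_vgpu_mm *mm)
{
	kref_put(&mm->ref, vgpu_mm_release);
}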
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This is a big one and the GVT shadow graphic memory management code is
heavily refined. The new code is more straightforward with less code.
The struct intel_vgpu_mm is restructured to be clearly defined and to use
accurate names, and some of the original fields that were really
redundant are removed.
Now we only manage ppgtt mm objects with mm->ppgtt_mm.lru_list. There is no need
to mix ppgtt and ggtt together, since one vGPU only has one ggtt object.
v4: Don't invoke ppgtt_free_all_shadow_page before intel_vgpu_destroy_all_ppgtt_mm.
v3: Add GVT_RING_CTX_NR_PDPS to avoid confusing about the PDPs.
v2: Split some changes into small standalone patches.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
To pull in the HDCP changes, especially wait_for changes to drm/i915
that Chris wants to build on top of.
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Driver Changes:
- Lift alpha_support protection from Cannonlake (Rodrigo)
* Meaning the driver should mostly work for the hardware we had
at our disposal when testing
* Used to be preliminary_hw_support
- Add missing Cannonlake PCI device ID of 0x5A4C (Rodrigo)
- Cannonlake port register fix (Mahesh)
- Fix Dell Venue 8 Pro black screen after modeset (Hans)
- Fix for always returning zero out-fence from execbuf (Daniele)
- Fix HDMI audio when no relevant video output is active (Jani)
- Fix memleak of VBT data on driver_unload (Hans)
- Fix for KASAN found locking issue (Maarten)
- RCU barrier consolidation to improve igt/gem_sync/idle (Chris)
- Optimizations to IRQ handlers (Chris)
- vblank tracking improvements (64-bit resolution, PM) (Dhinakaran)
- Pipe select bit corrections (Ville)
- Reduce runtime computed device_info fields (Chris)
- Tune down some WARN_ONs to GEM_BUG_ON now that CI has good coverage (Chris)
- A bunch of kerneldoc warning fixes (Chris)
* tag 'drm-intel-next-2018-02-21' of git://anongit.freedesktop.org/drm/drm-intel: (113 commits)
drm/i915: Update DRIVER_DATE to 20180221
drm/i915/fbc: Use PLANE_HAS_FENCE to determine if the plane is fenced
drm/i915/fbdev: Use the PLANE_HAS_FENCE flags from the time of pinning
drm/i915: Move the policy for placement of the GGTT vma into the caller
drm/i915: Also check view->type for a normal GGTT view
drm/i915: Drop WaDoubleCursorLP3Latency:ivb
drm/i915: Set the primary plane pipe select bits on gen4
drm/i915: Don't set cursor pipe select bits on g4x+
drm/i915: Assert that we don't overflow frontbuffer tracking bits
drm/i915: Track number of pending freed objects
drm/i915/: Initialise trans_min for skl_compute_transition_wm()
drm/i915: Clear the in-use marker on execbuf failure
drm/i915: Prune gen8_gt_irq_handler
drm/i915: Track GT interrupt handling using the master iir
drm/i915: Remove WARN_ONCE for failing to pm_runtime_if_in_use
drm: intel_dpio_phy: fix kernel-doc comments at nested struct
drm/i915: Release connector iterator on a digital port conflict.
drm/i915/execlists: Remove too early assert
drm/i915: Assert that we always complete a submission to guc/execlists
drm: move read_domains and write_domain into i915
...
We want to de-emphasize the link between the request (dependency,
execution and fence tracking) from GEM and so rename the struct from
drm_i915_gem_request to i915_request. That is we may implement the GEM
user interface on top of requests, but they are an abstraction for
tracking execution rather than an implementation detail of GEM. (Since
they are not tied to HW, we keep the i915 prefix as opposed to intel.)
In short, the spatch:
@@
@@
- struct drm_i915_gem_request
+ struct i915_request
A corollary to contracting the type name, we also harmonise on using
'rq' shorthand for local variables where space is of the essence and
repetition makes 'request' unwieldy. For globals and struct members,
'request' is still much preferred for its clarity.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180221095636.6649-1-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
i915 is the only driver using those fields in the drm_gem_object
structure, so they only waste memory for all other drivers.
Move the fields into drm_i915_gem_object instead and patch the i915 code
with the following sed commands:
sed -i "s/obj->base.read_domains/obj->read_domains/g" drivers/gpu/drm/i915/*.c drivers/gpu/drm/i915/*/*.c
sed -i "s/obj->base.write_domain/obj->write_domain/g" drivers/gpu/drm/i915/*.c drivers/gpu/drm/i915/*/*.c
Change is only compile tested.
v2: move fields around as suggested by Chris.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180216124338.9087-1-christian.koenig@amd.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Fix one typo in the render_mmio trace: the old and new mmio values were swapped.
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
GGTT resides in BAR0 and its entries are 8-byte aligned. With a qemu patch (commit
38d49e8c1523d97d2191190d3f7b4ce7a0ab5aa3), VFIO can use 8-byte reads/
writes to access it.
This patch supports the 8-byte GGTT reads/writes.
Ideally, we would like to support 8-byte reads/writes for the whole of BAR0,
but that needs more work for handling 8-byte MMIO reads/writes.
This patch fixes the issue caused by partially updated GGTT entries during
guest boot.
v3:
- Use intel_vgpu_get_bar_gpa() instead. (Zhenyu)
- Include all the GGTT checking logic in gtt_entry(). (Zhenyu)
v2:
- Limit to GGTT entry. (Zhenyu)
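A sketch of the v2/v3 idea, using intel_vgpu_get_bar_gpa() as the review notes
suggest; the range arithmetic and field names are assumptions:

/* Treat an access as a whole GGTT-entry update only when it is 8 bytes,
 * naturally aligned, and at or beyond the GGTT offset inside BAR0
 * (upper-bound checking is left to the existing range validation). */
static bool gtt_entry(struct intel_vgpu *vgpu, u64 pa, unsigned int bytes)
{
	u64 gttmmio_gpa = intel_vgpu_get_bar_gpa(vgpu, PCI_BASE_ADDRESS_0) +
			  vgpu->gvt->device_info.gtt_start_offset;

	return bytes == 8 && IS_ALIGNED(pa, 8) && pa >= gttmmio_gpa;
}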
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>