Merge 5.7-rc5 into tty-next

We need the tty fixes in here too.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman 2020-05-11 08:55:10 +02:00
commit 1cc18584e5
461 changed files with 4390 additions and 2041 deletions


@ -182,12 +182,15 @@ fix_padding
space-efficient. If this option is not present, large padding is
used - that is for compatibility with older kernels.
allow_discards
Allow block discard requests (a.k.a. TRIM) for the integrity device.
Discards are only allowed to devices using internal hash.
The journal mode (D/J), buffer_sectors, journal_watermark, commit_time can
be changed when reloading the target (load an inactive table and swap the
tables with suspend and resume). The other arguments should not be changed
when reloading the target because the layout of disk data depend on them
and the reloaded target would be non-functional.
The journal mode (D/J), buffer_sectors, journal_watermark, commit_time and
allow_discards can be changed when reloading the target (load an inactive
table and swap the tables with suspend and resume). The other arguments
should not be changed when reloading the target because the layout of disk
data depend on them and the reloaded target would be non-functional.
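For instance, turning allow_discards on for an active device boils down to
loading a modified table and swapping it in; a rough sketch (the device name,
sector count and hash parameters here are purely illustrative)::

# load the modified table into the inactive slot
dmsetup reload integ --table '0 1953792 integrity /dev/sdb 0 4 J 2 internal_hash:crc32c allow_discards'
# swap the tables in by suspending and resuming the device
dmsetup suspend integ
dmsetup resume integ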
The layout of the formatted block device:


@ -22,9 +22,7 @@ properties:
const: socionext,uniphier-xdmac
reg:
items:
- description: XDMAC base register region (offset and length)
- description: XDMAC extension register region (offset and length)
maxItems: 1
interrupts:
maxItems: 1
@ -49,12 +47,13 @@ required:
- reg
- interrupts
- "#dma-cells"
- dma-channels
examples:
- |
xdmac: dma-controller@5fc10000 {
compatible = "socionext,uniphier-xdmac";
reg = <0x5fc10000 0x1000>, <0x5fc20000 0x800>;
reg = <0x5fc10000 0x5300>;
interrupts = <0 188 4>;
#dma-cells = <2>;
dma-channels = <16>;


@ -61,8 +61,8 @@ The ``ice`` driver reports the following versions
- running
- ICE OS Default Package
- The name of the DDP package that is active in the device. The DDP
package is loaded by the driver during initialization. Each varation
of DDP package shall have a unique name.
package is loaded by the driver during initialization. Each
variation of the DDP package has a unique name.
* - ``fw.app``
- running
- 1.3.1.0


@ -28,3 +28,5 @@ KVM
arm/index
devices/index
running-nested-guests


@ -0,0 +1,276 @@
==============================
Running nested guests with KVM
==============================
A nested guest is a guest that runs inside another guest (the guest
hypervisor can be KVM-based or a different hypervisor). The
straightforward example is a KVM guest that in turn runs on a KVM guest
(the rest of this document is built on this example)::

            .----------------.  .----------------.
            |                |  |                |
            |       L2       |  |       L2       |
            | (Nested Guest) |  | (Nested Guest) |
            |                |  |                |
            |----------------'--'----------------|
            |                                    |
            |       L1 (Guest Hypervisor)        |
            |           KVM (/dev/kvm)           |
            |                                    |
   .------------------------------------------------------.
   |                 L0 (Host Hypervisor)                  |
   |                    KVM (/dev/kvm)                     |
   |------------------------------------------------------|
   |       Hardware (with virtualization extensions)       |
   '------------------------------------------------------'

Terminology:

- L0 - level-0; the bare metal host, running KVM
- L1 - level-1 guest; a VM running on L0; also called the "guest
  hypervisor", as it itself is capable of running KVM.
- L2 - level-2 guest; a VM running on L1, this is the "nested guest"
.. note:: The above diagram is modelled after the x86 architecture;
s390x, ppc64 and other architectures are likely to have
a different design for nesting.
For example, s390x always has an LPAR (LogicalPARtition)
hypervisor running on bare metal, adding another layer and
resulting in at least four levels in a nested setup — L0 (bare
metal, running the LPAR hypervisor), L1 (host hypervisor), L2
(guest hypervisor), L3 (nested guest).
This document will stick with the three-level terminology (L0,
L1, and L2) for all architectures; and will largely focus on
x86.
Use Cases
---------
There are several scenarios where nested KVM can be useful, to name a
few:
- As a developer, you want to test your software on different operating
systems (OSes). Instead of renting multiple VMs from a Cloud
Provider, using nested KVM lets you rent a large enough "guest
hypervisor" (level-1 guest). This in turn allows you to create
multiple nested guests (level-2 guests), running different OSes, on
which you can develop and test your software.
- Live migration of "guest hypervisors" and their nested guests, for
load balancing, disaster recovery, etc.
- VM image creation tools (e.g. ``virt-install``, etc) often run
their own VM, and users expect these to work inside a VM.
- Some OSes use virtualization internally for security (e.g. to let
applications run safely in isolation).
Enabling "nested" (x86)
-----------------------
From Linux kernel v4.19 onwards, the ``nested`` KVM parameter is enabled
by default for Intel and AMD. (Though your Linux distribution might
override this default.)
In case you are running a Linux kernel older than v4.19, to enable
nesting, set the ``nested`` KVM module parameter to ``Y`` or ``1``. To
persist this setting across reboots, you can add it in a config file, as
shown below:
1. On the bare metal host (L0), list the kernel modules and ensure that
the KVM modules are loaded::
$ lsmod | grep -i kvm
kvm_intel 133627 0
kvm 435079 1 kvm_intel
2. Show information for the ``kvm_intel`` module::
$ modinfo kvm_intel | grep -i nested
parm: nested:bool
3. For the nested KVM configuration to persist across reboots, place the
below in ``/etc/modprobe.d/kvm_intel.conf`` (create the file if it
doesn't exist)::
$ cat /etc/modprobe.d/kvm_intel.conf
options kvm-intel nested=y
4. Unload and re-load the KVM Intel module::
$ sudo rmmod kvm-intel
$ sudo modprobe kvm-intel
5. Verify if the ``nested`` parameter for KVM is enabled::
$ cat /sys/module/kvm_intel/parameters/nested
Y
For AMD hosts, the process is the same as above, except that the module
name is ``kvm-amd``.
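For illustration, the equivalent checks on an AMD host might look like this
(the config file name is arbitrary; only the ``kvm-amd``/``kvm_amd`` module
naming matters)::

$ cat /etc/modprobe.d/kvm_amd.conf
options kvm-amd nested=1

$ cat /sys/module/kvm_amd/parameters/nested
1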
Additional nested-related kernel parameters (x86)
-------------------------------------------------
If your hardware is sufficiently advanced (an Intel Haswell processor or
newer, which has newer hardware virt extensions), the following
additional features will also be enabled by default on your bare metal
host (L0): "Shadow VMCS (Virtual Machine Control Structure)" and APIC
virtualization. Parameters for Intel hosts::
$ cat /sys/module/kvm_intel/parameters/enable_shadow_vmcs
Y
$ cat /sys/module/kvm_intel/parameters/enable_apicv
Y
$ cat /sys/module/kvm_intel/parameters/ept
Y
.. note:: If you suspect your L2 (i.e. nested guest) is running slower,
ensure the above are enabled (particularly
``enable_shadow_vmcs`` and ``ept``).
Starting a nested guest (x86)
-----------------------------
Once your bare metal host (L0) is configured for nesting, you should be
able to start an L1 guest with::
$ qemu-kvm -cpu host [...]
The above will pass through the host CPU's capabilities as-is to the
guest; or, for better live migration compatibility, use a named CPU
model supported by QEMU, e.g.::
$ qemu-kvm -cpu Haswell-noTSX-IBRS,vmx=on
then the guest hypervisor will subsequently be capable of running a
nested guest with accelerated KVM.
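A quick sanity check from inside the L1 guest that the virtualization
extensions actually made it through (Intel shown; on an AMD host look for
``svm`` instead of ``vmx``)::

$ grep -wo vmx /proc/cpuinfo | sort -u
vmx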
Enabling "nested" (s390x)
-------------------------
1. On the host hypervisor (L0), enable the ``nested`` parameter on
s390x::
$ rmmod kvm
$ modprobe kvm nested=1
.. note:: On s390x, the kernel parameter ``hpage`` is mutually exclusive
with the ``nested`` parameter — i.e. to be able to enable
``nested``, the ``hpage`` parameter *must* be disabled.
2. The guest hypervisor (L1) must be provided with the ``sie`` CPU
feature — with QEMU, this can be done by using "host passthrough"
(via the command-line ``-cpu host``).
3. Now the KVM module can be loaded in the L1 (guest hypervisor)::
$ modprobe kvm
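To verify from inside the L1 guest that the ``sie`` feature was indeed
passed through (assuming the usual s390x ``/proc/cpuinfo`` ``features``
line)::

$ grep -wo sie /proc/cpuinfo | head -1
sie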
Live migration with nested KVM
------------------------------
Migrating an L1 guest, with a *live* nested guest in it, to another
bare metal host, works as of Linux kernel 5.3 and QEMU 4.2.0 for
Intel x86 systems, and even on older versions for s390x.
On AMD systems, once an L1 guest has started an L2 guest, the L1 guest
should no longer be migrated or saved (refer to QEMU documentation on
"savevm"/"loadvm") until the L2 guest shuts down. Attempting to migrate
or save-and-load an L1 guest while an L2 guest is running will result in
undefined behavior. You might see a ``kernel BUG!`` entry in ``dmesg``, a
kernel 'oops', or an outright kernel panic. Such a migrated or loaded L1
guest can no longer be considered stable or secure, and must be restarted.
Migrating an L1 guest merely configured to support nesting, while not
actually running L2 guests, is expected to function normally even on AMD
systems but may fail once guests are started.
Migrating an L2 guest is always expected to succeed, so all the following
scenarios should work even on AMD systems:
- Migrating a nested guest (L2) to another L1 guest on the *same* bare
metal host.
- Migrating a nested guest (L2) to another L1 guest on a *different*
bare metal host.
- Migrating a nested guest (L2) to a bare metal host.
Reporting bugs from nested setups
-----------------------------------
Debugging "nested" problems can involve sifting through log files across
L0, L1 and L2; this can result in tedious back-n-forth between the bug
reporter and the bug fixer.
- Mention that you are in a "nested" setup. If you are running any kind
of "nesting" at all, say so. Unfortunately, this needs to be called
out because when reporting bugs, people tend to forget to even
*mention* that they're using nested virtualization.
- Ensure you are actually running KVM on KVM. Sometimes people do not
have KVM enabled for their guest hypervisor (L1), which results in
them running with pure emulation (what QEMU calls "TCG") while they
think they're running nested KVM, thus confusing "nested virt" (which
could also mean QEMU on KVM) with "nested KVM" (KVM on KVM); a quick
check is sketched below.
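A minimal check from inside L1 before blaming nested KVM (if ``/dev/kvm``
is missing or the KVM modules are not loaded there, QEMU can only use
TCG)::

$ lsmod | grep kvm
$ [ -e /dev/kvm ] && echo "KVM usable in L1" || echo "no /dev/kvm - likely TCG"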
Information to collect (generic)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following is not an exhaustive list, but a very good starting point:
- Kernel, libvirt, and QEMU version from L0
- Kernel, libvirt and QEMU version from L1
- QEMU command-line of L1 -- when using libvirt, you'll find it here:
``/var/log/libvirt/qemu/instance.log``
- QEMU command-line of L2 -- as above, when using libvirt, get the
complete libvirt-generated QEMU command-line
- ``cat /proc/cpuinfo`` from L0
- ``cat /proc/cpuinfo`` from L1
- ``lscpu`` from L0
- ``lscpu`` from L1
- Full ``dmesg`` output from L0
- Full ``dmesg`` output from L1
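One way to gather the version details listed above, run on both L0 and L1
(the exact package tooling varies by distribution; these commands are only
illustrative)::

$ uname -r
$ libvirtd --version
$ qemu-system-x86_64 --version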
x86-specific info to collect
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Both of the commands below, ``x86info`` and ``dmidecode``, should be
available on most Linux distributions under those names:
- Output of: ``x86info -a`` from L0
- Output of: ``x86info -a`` from L1
- Output of: ``dmidecode`` from L0
- Output of: ``dmidecode`` from L1
s390x-specific info to collect
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Along with the generic details mentioned earlier, the following is
also recommended:
- ``/proc/sysinfo`` from L1; this will also include the info from L0


@ -3657,7 +3657,7 @@ L: linux-btrfs@vger.kernel.org
S: Maintained
W: http://btrfs.wiki.kernel.org/
Q: http://patchwork.kernel.org/project/linux-btrfs/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git
F: Documentation/filesystems/btrfs.rst
F: fs/btrfs/
F: include/linux/btrfs*
@ -3936,11 +3936,9 @@ F: arch/powerpc/platforms/cell/
CEPH COMMON CODE (LIBCEPH)
M: Ilya Dryomov <idryomov@gmail.com>
M: Jeff Layton <jlayton@kernel.org>
M: Sage Weil <sage@redhat.com>
L: ceph-devel@vger.kernel.org
S: Supported
W: http://ceph.com/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
T: git git://github.com/ceph/ceph-client.git
F: include/linux/ceph/
F: include/linux/crush/
@ -3948,12 +3946,10 @@ F: net/ceph/
CEPH DISTRIBUTED FILE SYSTEM CLIENT (CEPH)
M: Jeff Layton <jlayton@kernel.org>
M: Sage Weil <sage@redhat.com>
M: Ilya Dryomov <idryomov@gmail.com>
L: ceph-devel@vger.kernel.org
S: Supported
W: http://ceph.com/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
T: git git://github.com/ceph/ceph-client.git
F: Documentation/filesystems/ceph.rst
F: fs/ceph/
@ -5935,9 +5931,9 @@ F: lib/dynamic_debug.c
DYNAMIC INTERRUPT MODERATION
M: Tal Gilboa <talgi@mellanox.com>
S: Maintained
F: Documentation/networking/net_dim.rst
F: include/linux/dim.h
F: lib/dim/
F: Documentation/networking/net_dim.rst
DZ DECSTATION DZ11 SERIAL DRIVER
M: "Maciej W. Rozycki" <macro@linux-mips.org>
@ -7119,9 +7115,10 @@ F: include/uapi/asm-generic/
GENERIC PHY FRAMEWORK
M: Kishon Vijay Abraham I <kishon@ti.com>
M: Vinod Koul <vkoul@kernel.org>
L: linux-kernel@vger.kernel.org
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kishon/linux-phy.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy.git
F: Documentation/devicetree/bindings/phy/
F: drivers/phy/
F: include/linux/phy/
@ -7746,11 +7743,6 @@ L: platform-driver-x86@vger.kernel.org
S: Orphan
F: drivers/platform/x86/tc1100-wmi.c
HP100: Driver for HP 10/100 Mbit/s Voice Grade Network Adapter Series
M: Jaroslav Kysela <perex@perex.cz>
S: Obsolete
F: drivers/staging/hp/hp100.*
HPET: High Precision Event Timers driver
M: Clemens Ladisch <clemens@ladisch.de>
S: Maintained
@ -14102,12 +14094,10 @@ F: drivers/media/radio/radio-tea5777.c
RADOS BLOCK DEVICE (RBD)
M: Ilya Dryomov <idryomov@gmail.com>
M: Sage Weil <sage@redhat.com>
R: Dongsheng Yang <dongsheng.yang@easystack.cn>
L: ceph-devel@vger.kernel.org
S: Supported
W: http://ceph.com/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
T: git git://github.com/ceph/ceph-client.git
F: Documentation/ABI/testing/sysfs-bus-rbd
F: drivers/block/rbd.c


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 7
SUBLEVEL = 0
EXTRAVERSION = -rc3
EXTRAVERSION = -rc5
NAME = Kleptomaniac Octopus
# *DOCUMENTATION*
@ -729,10 +729,6 @@ else ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
KBUILD_CFLAGS += -Os
endif
ifdef CONFIG_CC_DISABLE_WARN_MAYBE_UNINITIALIZED
KBUILD_CFLAGS += -Wno-maybe-uninitialized
endif
# Tell gcc to never replace conditional load with a non-conditional one
KBUILD_CFLAGS += $(call cc-option,--param=allow-store-data-races=0)
KBUILD_CFLAGS += $(call cc-option,-fno-allow-store-data-races)
@ -881,6 +877,17 @@ KBUILD_CFLAGS += -Wno-pointer-sign
# disable stringop warnings in gcc 8+
KBUILD_CFLAGS += $(call cc-disable-warning, stringop-truncation)
# We'll want to enable this eventually, but it's not going away for 5.7 at least
KBUILD_CFLAGS += $(call cc-disable-warning, zero-length-bounds)
KBUILD_CFLAGS += $(call cc-disable-warning, array-bounds)
KBUILD_CFLAGS += $(call cc-disable-warning, stringop-overflow)
# Another good warning that we'll want to enable eventually
KBUILD_CFLAGS += $(call cc-disable-warning, restrict)
# Enabled with W=2, disabled by default as noisy
KBUILD_CFLAGS += $(call cc-disable-warning, maybe-uninitialized)
# disable invalid "can't wrap" optimizations for signed / pointers
KBUILD_CFLAGS += $(call cc-option,-fno-strict-overflow)


@ -91,9 +91,17 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
return;
}
kernel_neon_begin();
chacha_doneon(state, dst, src, bytes, nrounds);
kernel_neon_end();
do {
unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
kernel_neon_begin();
chacha_doneon(state, dst, src, todo, nrounds);
kernel_neon_end();
bytes -= todo;
src += todo;
dst += todo;
} while (bytes);
}
EXPORT_SYMBOL(chacha_crypt_arch);


@ -30,7 +30,7 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
return crypto_nhpoly1305_update(desc, src, srclen);
do {
unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
unsigned int n = min_t(unsigned int, srclen, SZ_4K);
kernel_neon_begin();
crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);


@ -160,13 +160,20 @@ void poly1305_update_arch(struct poly1305_desc_ctx *dctx, const u8 *src,
unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE);
if (static_branch_likely(&have_neon) && do_neon) {
kernel_neon_begin();
poly1305_blocks_neon(&dctx->h, src, len, 1);
kernel_neon_end();
do {
unsigned int todo = min_t(unsigned int, len, SZ_4K);
kernel_neon_begin();
poly1305_blocks_neon(&dctx->h, src, todo, 1);
kernel_neon_end();
len -= todo;
src += todo;
} while (len);
} else {
poly1305_blocks_arm(&dctx->h, src, len, 1);
src += len;
}
src += len;
nbytes %= POLY1305_BLOCK_SIZE;
}


@ -165,8 +165,13 @@ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
preempt_enable();
#endif
if (!ret)
*oval = oldval;
/*
* Store unconditionally. If ret != 0 the extra store is the least
* of the worries but GCC cannot figure out that __futex_atomic_op()
* is either setting ret to -EFAULT or storing the old value in
* oldval which results in an uninitialized warning at the call site.
*/
*oval = oldval;
return ret;
}


@ -87,9 +87,17 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
!crypto_simd_usable())
return chacha_crypt_generic(state, dst, src, bytes, nrounds);
kernel_neon_begin();
chacha_doneon(state, dst, src, bytes, nrounds);
kernel_neon_end();
do {
unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
kernel_neon_begin();
chacha_doneon(state, dst, src, todo, nrounds);
kernel_neon_end();
bytes -= todo;
src += todo;
dst += todo;
} while (bytes);
}
EXPORT_SYMBOL(chacha_crypt_arch);


@ -30,7 +30,7 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
return crypto_nhpoly1305_update(desc, src, srclen);
do {
unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
unsigned int n = min_t(unsigned int, srclen, SZ_4K);
kernel_neon_begin();
crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);


@ -143,13 +143,20 @@ void poly1305_update_arch(struct poly1305_desc_ctx *dctx, const u8 *src,
unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE);
if (static_branch_likely(&have_neon) && crypto_simd_usable()) {
kernel_neon_begin();
poly1305_blocks_neon(&dctx->h, src, len, 1);
kernel_neon_end();
do {
unsigned int todo = min_t(unsigned int, len, SZ_4K);
kernel_neon_begin();
poly1305_blocks_neon(&dctx->h, src, todo, 1);
kernel_neon_end();
len -= todo;
src += todo;
} while (len);
} else {
poly1305_blocks(&dctx->h, src, len, 1);
src += len;
}
src += len;
nbytes %= POLY1305_BLOCK_SIZE;
}


@ -32,7 +32,7 @@ UBSAN_SANITIZE := n
OBJECT_FILES_NON_STANDARD := y
KCOV_INSTRUMENT := n
CFLAGS_vgettimeofday.o = -O2 -mcmodel=tiny
CFLAGS_vgettimeofday.o = -O2 -mcmodel=tiny -fasynchronous-unwind-tables
ifneq ($(c-gettimeofday-y),)
CFLAGS_vgettimeofday.o += -include $(c-gettimeofday-y)


@ -200,6 +200,13 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
}
memcpy((u32 *)regs + off, valp, KVM_REG_SIZE(reg->id));
if (*vcpu_cpsr(vcpu) & PSR_MODE32_BIT) {
int i;
for (i = 0; i < 16; i++)
*vcpu_reg32(vcpu, i) = (u32)*vcpu_reg32(vcpu, i);
}
out:
return err;
}


@ -18,6 +18,7 @@
#define CPU_GP_REG_OFFSET(x) (CPU_GP_REGS + x)
#define CPU_XREG_OFFSET(x) CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
#define CPU_SP_EL0_OFFSET (CPU_XREG_OFFSET(30) + 8)
.text
.pushsection .hyp.text, "ax"
@ -47,6 +48,16 @@
ldp x29, lr, [\ctxt, #CPU_XREG_OFFSET(29)]
.endm
.macro save_sp_el0 ctxt, tmp
mrs \tmp, sp_el0
str \tmp, [\ctxt, #CPU_SP_EL0_OFFSET]
.endm
.macro restore_sp_el0 ctxt, tmp
ldr \tmp, [\ctxt, #CPU_SP_EL0_OFFSET]
msr sp_el0, \tmp
.endm
/*
* u64 __guest_enter(struct kvm_vcpu *vcpu,
* struct kvm_cpu_context *host_ctxt);
@ -60,6 +71,9 @@ SYM_FUNC_START(__guest_enter)
// Store the host regs
save_callee_saved_regs x1
// Save the host's sp_el0
save_sp_el0 x1, x2
// Now the host state is stored if we have a pending RAS SError it must
// affect the host. If any asynchronous exception is pending we defer
// the guest entry. The DSB isn't necessary before v8.2 as any SError
@ -83,6 +97,9 @@ alternative_else_nop_endif
// when this feature is enabled for kernel code.
ptrauth_switch_to_guest x29, x0, x1, x2
// Restore the guest's sp_el0
restore_sp_el0 x29, x0
// Restore guest regs x0-x17
ldp x0, x1, [x29, #CPU_XREG_OFFSET(0)]
ldp x2, x3, [x29, #CPU_XREG_OFFSET(2)]
@ -130,6 +147,9 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
// Store the guest regs x18-x29, lr
save_callee_saved_regs x1
// Store the guest's sp_el0
save_sp_el0 x1, x2
get_host_ctxt x2, x3
// Macro ptrauth_switch_to_guest format:
@ -139,6 +159,9 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
// when this feature is enabled for kernel code.
ptrauth_switch_to_host x1, x2, x3, x4, x5
// Restore the hosts's sp_el0
restore_sp_el0 x2, x3
// Now restore the host regs
restore_callee_saved_regs x2


@ -198,7 +198,6 @@ SYM_CODE_END(__hyp_panic)
.macro invalid_vector label, target = __hyp_panic
.align 2
SYM_CODE_START(\label)
\label:
b \target
SYM_CODE_END(\label)
.endm


@ -15,8 +15,9 @@
/*
* Non-VHE: Both host and guest must save everything.
*
* VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and pstate,
* which are handled as part of the el2 return state) on every switch.
* VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and
* pstate, which are handled as part of the el2 return state) on every
* switch (sp_el0 is being dealt with in the assembly code).
* tpidr_el0 and tpidrro_el0 only need to be switched when going
* to host userspace or a different VCPU. EL1 registers only need to be
* switched when potentially going to run a different VCPU. The latter two
@ -26,12 +27,6 @@
static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt)
{
ctxt->sys_regs[MDSCR_EL1] = read_sysreg(mdscr_el1);
/*
* The host arm64 Linux uses sp_el0 to point to 'current' and it must
* therefore be saved/restored on every entry/exit to/from the guest.
*/
ctxt->gp_regs.regs.sp = read_sysreg(sp_el0);
}
static void __hyp_text __sysreg_save_user_state(struct kvm_cpu_context *ctxt)
@ -99,12 +94,6 @@ NOKPROBE_SYMBOL(sysreg_save_guest_state_vhe);
static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctxt)
{
write_sysreg(ctxt->sys_regs[MDSCR_EL1], mdscr_el1);
/*
* The host arm64 Linux uses sp_el0 to point to 'current' and it must
* therefore be saved/restored on every entry/exit to/from the guest.
*/
write_sysreg(ctxt->gp_regs.regs.sp, sp_el0);
}
static void __hyp_text __sysreg_restore_user_state(struct kvm_cpu_context *ctxt)


@ -230,6 +230,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
ptep = (pte_t *)pudp;
} else if (sz == (CONT_PTE_SIZE)) {
pmdp = pmd_alloc(mm, pudp, addr);
if (!pmdp)
return NULL;
WARN_ON(addr & (sz - 1));
/*


@ -521,6 +521,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_IOEVENTFD:
case KVM_CAP_DEVICE_CTRL:
case KVM_CAP_IMMEDIATE_EXIT:
case KVM_CAP_SET_GUEST_DEBUG:
r = 1;
break;
case KVM_CAP_PPC_GUEST_DEBUG_SSTEP:


@ -60,7 +60,7 @@ config RISCV
select ARCH_HAS_GIGANTIC_PAGE
select ARCH_HAS_SET_DIRECT_MAP
select ARCH_HAS_SET_MEMORY
select ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_HAS_STRICT_KERNEL_RWX if MMU
select ARCH_WANT_HUGE_PMD_SHARE if 64BIT
select SPARSEMEM_STATIC if 32BIT
select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU


@ -51,13 +51,10 @@
#define CAUSE_IRQ_FLAG (_AC(1, UL) << (__riscv_xlen - 1))
/* Interrupt causes (minus the high bit) */
#define IRQ_U_SOFT 0
#define IRQ_S_SOFT 1
#define IRQ_M_SOFT 3
#define IRQ_U_TIMER 4
#define IRQ_S_TIMER 5
#define IRQ_M_TIMER 7
#define IRQ_U_EXT 8
#define IRQ_S_EXT 9
#define IRQ_M_EXT 11


@ -8,6 +8,7 @@
#ifndef _ASM_RISCV_HWCAP_H
#define _ASM_RISCV_HWCAP_H
#include <linux/bits.h>
#include <uapi/asm/hwcap.h>
#ifndef __ASSEMBLY__
@ -22,6 +23,27 @@ enum {
};
extern unsigned long elf_hwcap;
#define RISCV_ISA_EXT_a ('a' - 'a')
#define RISCV_ISA_EXT_c ('c' - 'a')
#define RISCV_ISA_EXT_d ('d' - 'a')
#define RISCV_ISA_EXT_f ('f' - 'a')
#define RISCV_ISA_EXT_h ('h' - 'a')
#define RISCV_ISA_EXT_i ('i' - 'a')
#define RISCV_ISA_EXT_m ('m' - 'a')
#define RISCV_ISA_EXT_s ('s' - 'a')
#define RISCV_ISA_EXT_u ('u' - 'a')
#define RISCV_ISA_EXT_MAX 64
unsigned long riscv_isa_extension_base(const unsigned long *isa_bitmap);
#define riscv_isa_extension_mask(ext) BIT_MASK(RISCV_ISA_EXT_##ext)
bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, int bit);
#define riscv_isa_extension_available(isa_bitmap, ext) \
__riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_##ext)
#endif
#endif /* _ASM_RISCV_HWCAP_H */


@ -22,14 +22,6 @@ static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
#endif
#ifdef CONFIG_STRICT_KERNEL_RWX
void set_kernel_text_ro(void);
void set_kernel_text_rw(void);
#else
static inline void set_kernel_text_ro(void) { }
static inline void set_kernel_text_rw(void) { }
#endif
int set_direct_map_invalid_noflush(struct page *page);
int set_direct_map_default_noflush(struct page *page);


@ -15,8 +15,8 @@
const struct cpu_operations *cpu_ops[NR_CPUS] __ro_after_init;
void *__cpu_up_stack_pointer[NR_CPUS];
void *__cpu_up_task_pointer[NR_CPUS];
void *__cpu_up_stack_pointer[NR_CPUS] __section(.data);
void *__cpu_up_task_pointer[NR_CPUS] __section(.data);
extern const struct cpu_operations cpu_ops_sbi;
extern const struct cpu_operations cpu_ops_spinwait;


@ -6,6 +6,7 @@
* Copyright (C) 2017 SiFive
*/
#include <linux/bitmap.h>
#include <linux/of.h>
#include <asm/processor.h>
#include <asm/hwcap.h>
@ -13,15 +14,57 @@
#include <asm/switch_to.h>
unsigned long elf_hwcap __read_mostly;
/* Host ISA bitmap */
static DECLARE_BITMAP(riscv_isa, RISCV_ISA_EXT_MAX) __read_mostly;
#ifdef CONFIG_FPU
bool has_fpu __read_mostly;
#endif
/**
* riscv_isa_extension_base() - Get base extension word
*
* @isa_bitmap: ISA bitmap to use
* Return: base extension word as unsigned long value
*
* NOTE: If isa_bitmap is NULL then Host ISA bitmap will be used.
*/
unsigned long riscv_isa_extension_base(const unsigned long *isa_bitmap)
{
if (!isa_bitmap)
return riscv_isa[0];
return isa_bitmap[0];
}
EXPORT_SYMBOL_GPL(riscv_isa_extension_base);
/**
* __riscv_isa_extension_available() - Check whether given extension
* is available or not
*
* @isa_bitmap: ISA bitmap to use
* @bit: bit position of the desired extension
* Return: true or false
*
* NOTE: If isa_bitmap is NULL then Host ISA bitmap will be used.
*/
bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, int bit)
{
const unsigned long *bmap = (isa_bitmap) ? isa_bitmap : riscv_isa;
if (bit >= RISCV_ISA_EXT_MAX)
return false;
return test_bit(bit, bmap) ? true : false;
}
EXPORT_SYMBOL_GPL(__riscv_isa_extension_available);
void riscv_fill_hwcap(void)
{
struct device_node *node;
const char *isa;
size_t i;
char print_str[BITS_PER_LONG + 1];
size_t i, j, isa_len;
static unsigned long isa2hwcap[256] = {0};
isa2hwcap['i'] = isa2hwcap['I'] = COMPAT_HWCAP_ISA_I;
@ -33,8 +76,11 @@ void riscv_fill_hwcap(void)
elf_hwcap = 0;
bitmap_zero(riscv_isa, RISCV_ISA_EXT_MAX);
for_each_of_cpu_node(node) {
unsigned long this_hwcap = 0;
unsigned long this_isa = 0;
if (riscv_of_processor_hartid(node) < 0)
continue;
@ -44,8 +90,24 @@ void riscv_fill_hwcap(void)
continue;
}
for (i = 0; i < strlen(isa); ++i)
i = 0;
isa_len = strlen(isa);
#if IS_ENABLED(CONFIG_32BIT)
if (!strncmp(isa, "rv32", 4))
i += 4;
#elif IS_ENABLED(CONFIG_64BIT)
if (!strncmp(isa, "rv64", 4))
i += 4;
#endif
for (; i < isa_len; ++i) {
this_hwcap |= isa2hwcap[(unsigned char)(isa[i])];
/*
* TODO: X, Y and Z extension parsing for Host ISA
* bitmap will be added in-future.
*/
if ('a' <= isa[i] && isa[i] < 'x')
this_isa |= (1UL << (isa[i] - 'a'));
}
/*
* All "okay" hart should have same isa. Set HWCAP based on
@ -56,6 +118,11 @@ void riscv_fill_hwcap(void)
elf_hwcap &= this_hwcap;
else
elf_hwcap = this_hwcap;
if (riscv_isa[0])
riscv_isa[0] &= this_isa;
else
riscv_isa[0] = this_isa;
}
/* We don't support systems with F but without D, so mask those out
@ -65,7 +132,17 @@ void riscv_fill_hwcap(void)
elf_hwcap &= ~COMPAT_HWCAP_ISA_F;
}
pr_info("elf_hwcap is 0x%lx\n", elf_hwcap);
memset(print_str, 0, sizeof(print_str));
for (i = 0, j = 0; i < BITS_PER_LONG; i++)
if (riscv_isa[0] & BIT_MASK(i))
print_str[j++] = (char)('a' + i);
pr_info("riscv: ISA extensions %s\n", print_str);
memset(print_str, 0, sizeof(print_str));
for (i = 0, j = 0; i < BITS_PER_LONG; i++)
if (elf_hwcap & BIT_MASK(i))
print_str[j++] = (char)('a' + i);
pr_info("riscv: ELF capabilities %s\n", print_str);
#ifdef CONFIG_FPU
if (elf_hwcap & (COMPAT_HWCAP_ISA_F | COMPAT_HWCAP_ISA_D))


@ -102,7 +102,7 @@ void sbi_shutdown(void)
{
sbi_ecall(SBI_EXT_0_1_SHUTDOWN, 0, 0, 0, 0, 0, 0, 0);
}
EXPORT_SYMBOL(sbi_set_timer);
EXPORT_SYMBOL(sbi_shutdown);
/**
* sbi_clear_ipi() - Clear any pending IPIs for the calling hart.
@ -113,7 +113,7 @@ void sbi_clear_ipi(void)
{
sbi_ecall(SBI_EXT_0_1_CLEAR_IPI, 0, 0, 0, 0, 0, 0, 0);
}
EXPORT_SYMBOL(sbi_shutdown);
EXPORT_SYMBOL(sbi_clear_ipi);
/**
* sbi_set_timer_v01() - Program the timer for next timer event.
@ -167,6 +167,11 @@ static int __sbi_rfence_v01(int fid, const unsigned long *hart_mask,
return result;
}
static void sbi_set_power_off(void)
{
pm_power_off = sbi_shutdown;
}
#else
static void __sbi_set_timer_v01(uint64_t stime_value)
{
@ -191,6 +196,8 @@ static int __sbi_rfence_v01(int fid, const unsigned long *hart_mask,
return 0;
}
static void sbi_set_power_off(void) {}
#endif /* CONFIG_RISCV_SBI_V01 */
static void __sbi_set_timer_v02(uint64_t stime_value)
@ -540,16 +547,12 @@ static inline long sbi_get_firmware_version(void)
return __sbi_base_ecall(SBI_EXT_BASE_GET_IMP_VERSION);
}
static void sbi_power_off(void)
{
sbi_shutdown();
}
int __init sbi_init(void)
{
int ret;
pm_power_off = sbi_power_off;
sbi_set_power_off();
ret = sbi_get_spec_version();
if (ret > 0)
sbi_spec_version = ret;


@ -10,6 +10,7 @@
#include <linux/cpu.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/profile.h>
#include <linux/smp.h>
#include <linux/sched.h>
@ -63,6 +64,7 @@ void riscv_cpuid_to_hartid_mask(const struct cpumask *in, struct cpumask *out)
for_each_cpu(cpu, in)
cpumask_set_cpu(cpuid_to_hartid_map(cpu), out);
}
EXPORT_SYMBOL_GPL(riscv_cpuid_to_hartid_mask);
bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
{


@ -12,6 +12,8 @@
#include <linux/stacktrace.h>
#include <linux/ftrace.h>
register unsigned long sp_in_global __asm__("sp");
#ifdef CONFIG_FRAME_POINTER
struct stackframe {
@ -19,8 +21,6 @@ struct stackframe {
unsigned long ra;
};
register unsigned long sp_in_global __asm__("sp");
void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
bool (*fn)(unsigned long, void *), void *arg)
{


@ -12,7 +12,7 @@ vdso-syms += getcpu
vdso-syms += flush_icache
# Files to link into the vdso
obj-vdso = $(patsubst %, %.o, $(vdso-syms))
obj-vdso = $(patsubst %, %.o, $(vdso-syms)) note.o
# Build rules
targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.lds vdso-dummy.o
@ -33,15 +33,15 @@ $(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso) FORCE
$(call if_changed,vdsold)
# We also create a special relocatable object that should mirror the symbol
# table and layout of the linked DSO. With ld -R we can then refer to
# these symbols in the kernel code rather than hand-coded addresses.
# table and layout of the linked DSO. With ld --just-symbols we can then
# refer to these symbols in the kernel code rather than hand-coded addresses.
SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
-Wl,--build-id -Wl,--hash-style=both
$(obj)/vdso-dummy.o: $(src)/vdso.lds $(obj)/rt_sigreturn.o FORCE
$(call if_changed,vdsold)
LDFLAGS_vdso-syms.o := -r -R
LDFLAGS_vdso-syms.o := -r --just-symbols
$(obj)/vdso-syms.o: $(obj)/vdso-dummy.o FORCE
$(call if_changed,ld)


@ -0,0 +1,12 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* This supplies .note.* sections to go into the PT_NOTE inside the vDSO text.
* Here we can supply some information useful to userland.
*/
#include <linux/elfnote.h>
#include <linux/version.h>
ELFNOTE_START(Linux, 0, "a")
.long LINUX_VERSION_CODE
ELFNOTE_END


@ -150,7 +150,8 @@ void __init setup_bootmem(void)
memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
set_max_mapnr(PFN_DOWN(mem_size));
max_low_pfn = PFN_DOWN(memblock_end_of_DRAM());
max_pfn = PFN_DOWN(memblock_end_of_DRAM());
max_low_pfn = max_pfn;
#ifdef CONFIG_BLK_DEV_INITRD
setup_initrd();
@ -501,22 +502,6 @@ static inline void setup_vm_final(void)
#endif /* CONFIG_MMU */
#ifdef CONFIG_STRICT_KERNEL_RWX
void set_kernel_text_rw(void)
{
unsigned long text_start = (unsigned long)_text;
unsigned long text_end = (unsigned long)_etext;
set_memory_rw(text_start, (text_end - text_start) >> PAGE_SHIFT);
}
void set_kernel_text_ro(void)
{
unsigned long text_start = (unsigned long)_text;
unsigned long text_end = (unsigned long)_etext;
set_memory_ro(text_start, (text_end - text_start) >> PAGE_SHIFT);
}
void mark_rodata_ro(void)
{
unsigned long text_start = (unsigned long)_text;


@ -545,6 +545,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_S390_AIS:
case KVM_CAP_S390_AIS_MIGRATION:
case KVM_CAP_S390_VCPU_RESETS:
case KVM_CAP_SET_GUEST_DEBUG:
r = 1;
break;
case KVM_CAP_S390_HPAGE_1M:


@ -626,10 +626,12 @@ static int handle_pqap(struct kvm_vcpu *vcpu)
* available for the guest are AQIC and TAPQ with the t bit set
* since we do not set IC.3 (FIII) we currently will only intercept
* the AQIC function code.
* Note: running nested under z/VM can result in intercepts for other
* function codes, e.g. PQAP(QCI). We do not support this and bail out.
*/
reg0 = vcpu->run->s.regs.gprs[0];
fc = (reg0 >> 24) & 0xff;
if (WARN_ON_ONCE(fc != 0x03))
if (fc != 0x03)
return -EOPNOTSUPP;
/* PQAP instruction is allowed for guest kernel only */


@ -64,10 +64,13 @@ mm_segment_t enable_sacf_uaccess(void)
{
mm_segment_t old_fs;
unsigned long asce, cr;
unsigned long flags;
old_fs = current->thread.mm_segment;
if (old_fs & 1)
return old_fs;
/* protect against a concurrent page table upgrade */
local_irq_save(flags);
current->thread.mm_segment |= 1;
asce = S390_lowcore.kernel_asce;
if (likely(old_fs == USER_DS)) {
@ -83,6 +86,7 @@ mm_segment_t enable_sacf_uaccess(void)
__ctl_load(asce, 7, 7);
set_cpu_flag(CIF_ASCE_SECONDARY);
}
local_irq_restore(flags);
return old_fs;
}
EXPORT_SYMBOL(enable_sacf_uaccess);


@ -70,8 +70,20 @@ static void __crst_table_upgrade(void *arg)
{
struct mm_struct *mm = arg;
if (current->active_mm == mm)
set_user_asce(mm);
/* we must change all active ASCEs to avoid the creation of new TLBs */
if (current->active_mm == mm) {
S390_lowcore.user_asce = mm->context.asce;
if (current->thread.mm_segment == USER_DS) {
__ctl_load(S390_lowcore.user_asce, 1, 1);
/* Mark user-ASCE present in CR1 */
clear_cpu_flag(CIF_ASCE_PRIMARY);
}
if (current->thread.mm_segment == USER_DS_SACF) {
__ctl_load(S390_lowcore.user_asce, 7, 7);
/* enable_sacf_uaccess does all or nothing */
WARN_ON(!test_cpu_flag(CIF_ASCE_SECONDARY));
}
}
__tlb_flush_local();
}


@ -32,16 +32,16 @@ void blake2s_compress_arch(struct blake2s_state *state,
const u32 inc)
{
/* SIMD disables preemption, so relax after processing each page. */
BUILD_BUG_ON(PAGE_SIZE / BLAKE2S_BLOCK_SIZE < 8);
BUILD_BUG_ON(SZ_4K / BLAKE2S_BLOCK_SIZE < 8);
if (!static_branch_likely(&blake2s_use_ssse3) || !crypto_simd_usable()) {
blake2s_compress_generic(state, block, nblocks, inc);
return;
}
for (;;) {
do {
const size_t blocks = min_t(size_t, nblocks,
PAGE_SIZE / BLAKE2S_BLOCK_SIZE);
SZ_4K / BLAKE2S_BLOCK_SIZE);
kernel_fpu_begin();
if (IS_ENABLED(CONFIG_AS_AVX512) &&
@ -52,10 +52,8 @@ void blake2s_compress_arch(struct blake2s_state *state,
kernel_fpu_end();
nblocks -= blocks;
if (!nblocks)
break;
block += blocks * BLAKE2S_BLOCK_SIZE;
}
} while (nblocks);
}
EXPORT_SYMBOL(blake2s_compress_arch);


@ -153,9 +153,17 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
bytes <= CHACHA_BLOCK_SIZE)
return chacha_crypt_generic(state, dst, src, bytes, nrounds);
kernel_fpu_begin();
chacha_dosimd(state, dst, src, bytes, nrounds);
kernel_fpu_end();
do {
unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
kernel_fpu_begin();
chacha_dosimd(state, dst, src, todo, nrounds);
kernel_fpu_end();
bytes -= todo;
src += todo;
dst += todo;
} while (bytes);
}
EXPORT_SYMBOL(chacha_crypt_arch);


@ -29,7 +29,7 @@ static int nhpoly1305_avx2_update(struct shash_desc *desc,
return crypto_nhpoly1305_update(desc, src, srclen);
do {
unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
unsigned int n = min_t(unsigned int, srclen, SZ_4K);
kernel_fpu_begin();
crypto_nhpoly1305_update_helper(desc, src, n, _nh_avx2);


@ -29,7 +29,7 @@ static int nhpoly1305_sse2_update(struct shash_desc *desc,
return crypto_nhpoly1305_update(desc, src, srclen);
do {
unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
unsigned int n = min_t(unsigned int, srclen, SZ_4K);
kernel_fpu_begin();
crypto_nhpoly1305_update_helper(desc, src, n, _nh_sse2);


@ -91,8 +91,8 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,
struct poly1305_arch_internal *state = ctx;
/* SIMD disables preemption, so relax after processing each page. */
BUILD_BUG_ON(PAGE_SIZE < POLY1305_BLOCK_SIZE ||
PAGE_SIZE % POLY1305_BLOCK_SIZE);
BUILD_BUG_ON(SZ_4K < POLY1305_BLOCK_SIZE ||
SZ_4K % POLY1305_BLOCK_SIZE);
if (!static_branch_likely(&poly1305_use_avx) ||
(len < (POLY1305_BLOCK_SIZE * 18) && !state->is_base2_26) ||
@ -102,8 +102,8 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,
return;
}
for (;;) {
const size_t bytes = min_t(size_t, len, PAGE_SIZE);
do {
const size_t bytes = min_t(size_t, len, SZ_4K);
kernel_fpu_begin();
if (IS_ENABLED(CONFIG_AS_AVX512) && static_branch_likely(&poly1305_use_avx512))
@ -113,11 +113,10 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,
else
poly1305_blocks_avx(ctx, inp, bytes, padbit);
kernel_fpu_end();
len -= bytes;
if (!len)
break;
inp += bytes;
}
} while (len);
}
static void poly1305_simd_emit(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],


@ -98,13 +98,6 @@ For 32-bit we have the following conventions - kernel is built with
#define SIZEOF_PTREGS 21*8
.macro PUSH_AND_CLEAR_REGS rdx=%rdx rax=%rax save_ret=0
/*
* Push registers and sanitize registers of values that a
* speculation attack might otherwise want to exploit. The
* lower registers are likely clobbered well before they
* could be put to use in a speculative execution gadget.
* Interleave XOR with PUSH for better uop scheduling:
*/
.if \save_ret
pushq %rsi /* pt_regs->si */
movq 8(%rsp), %rsi /* temporarily store the return address in %rsi */
@ -114,34 +107,43 @@ For 32-bit we have the following conventions - kernel is built with
pushq %rsi /* pt_regs->si */
.endif
pushq \rdx /* pt_regs->dx */
xorl %edx, %edx /* nospec dx */
pushq %rcx /* pt_regs->cx */
xorl %ecx, %ecx /* nospec cx */
pushq \rax /* pt_regs->ax */
pushq %r8 /* pt_regs->r8 */
xorl %r8d, %r8d /* nospec r8 */
pushq %r9 /* pt_regs->r9 */
xorl %r9d, %r9d /* nospec r9 */
pushq %r10 /* pt_regs->r10 */
xorl %r10d, %r10d /* nospec r10 */
pushq %r11 /* pt_regs->r11 */
xorl %r11d, %r11d /* nospec r11*/
pushq %rbx /* pt_regs->rbx */
xorl %ebx, %ebx /* nospec rbx*/
pushq %rbp /* pt_regs->rbp */
xorl %ebp, %ebp /* nospec rbp*/
pushq %r12 /* pt_regs->r12 */
xorl %r12d, %r12d /* nospec r12*/
pushq %r13 /* pt_regs->r13 */
xorl %r13d, %r13d /* nospec r13*/
pushq %r14 /* pt_regs->r14 */
xorl %r14d, %r14d /* nospec r14*/
pushq %r15 /* pt_regs->r15 */
xorl %r15d, %r15d /* nospec r15*/
UNWIND_HINT_REGS
.if \save_ret
pushq %rsi /* return address on top of stack */
.endif
/*
* Sanitize registers of values that a speculation attack might
* otherwise want to exploit. The lower registers are likely clobbered
* well before they could be put to use in a speculative execution
* gadget.
*/
xorl %edx, %edx /* nospec dx */
xorl %ecx, %ecx /* nospec cx */
xorl %r8d, %r8d /* nospec r8 */
xorl %r9d, %r9d /* nospec r9 */
xorl %r10d, %r10d /* nospec r10 */
xorl %r11d, %r11d /* nospec r11 */
xorl %ebx, %ebx /* nospec rbx */
xorl %ebp, %ebp /* nospec rbp */
xorl %r12d, %r12d /* nospec r12 */
xorl %r13d, %r13d /* nospec r13 */
xorl %r14d, %r14d /* nospec r14 */
xorl %r15d, %r15d /* nospec r15 */
.endm
.macro POP_REGS pop_rdi=1 skip_r11rcx=0


@ -249,7 +249,6 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
*/
syscall_return_via_sysret:
/* rcx and r11 are already restored (see code above) */
UNWIND_HINT_EMPTY
POP_REGS pop_rdi=0 skip_r11rcx=1
/*
@ -258,6 +257,7 @@ syscall_return_via_sysret:
*/
movq %rsp, %rdi
movq PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rsp
UNWIND_HINT_EMPTY
pushq RSP-RDI(%rdi) /* RSP */
pushq (%rdi) /* RDI */
@ -279,8 +279,7 @@ SYM_CODE_END(entry_SYSCALL_64)
* %rdi: prev task
* %rsi: next task
*/
SYM_CODE_START(__switch_to_asm)
UNWIND_HINT_FUNC
SYM_FUNC_START(__switch_to_asm)
/*
* Save callee-saved registers
* This must match the order in inactive_task_frame
@ -321,7 +320,7 @@ SYM_CODE_START(__switch_to_asm)
popq %rbp
jmp __switch_to
SYM_CODE_END(__switch_to_asm)
SYM_FUNC_END(__switch_to_asm)
/*
* A newly forked process directly context switches into this address.
@ -512,7 +511,7 @@ SYM_CODE_END(spurious_entries_start)
* +----------------------------------------------------+
*/
SYM_CODE_START(interrupt_entry)
UNWIND_HINT_FUNC
UNWIND_HINT_IRET_REGS offset=16
ASM_CLAC
cld
@ -544,9 +543,9 @@ SYM_CODE_START(interrupt_entry)
pushq 5*8(%rdi) /* regs->eflags */
pushq 4*8(%rdi) /* regs->cs */
pushq 3*8(%rdi) /* regs->ip */
UNWIND_HINT_IRET_REGS
pushq 2*8(%rdi) /* regs->orig_ax */
pushq 8(%rdi) /* return address */
UNWIND_HINT_FUNC
movq (%rdi), %rdi
jmp 2f
@ -637,6 +636,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
*/
movq %rsp, %rdi
movq PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rsp
UNWIND_HINT_EMPTY
/* Copy the IRET frame to the trampoline stack. */
pushq 6*8(%rdi) /* SS */
@ -1739,7 +1739,7 @@ SYM_CODE_START(rewind_stack_do_exit)
movq PER_CPU_VAR(cpu_current_top_of_stack), %rax
leaq -PTREGS_SIZE(%rax), %rsp
UNWIND_HINT_FUNC sp_offset=PTREGS_SIZE
UNWIND_HINT_REGS
call do_exit
SYM_CODE_END(rewind_stack_do_exit)


@ -73,7 +73,8 @@ static int hv_cpu_init(unsigned int cpu)
struct page *pg;
input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
pg = alloc_page(GFP_KERNEL);
/* hv_cpu_init() can be called with IRQs disabled from hv_resume() */
pg = alloc_page(irqs_disabled() ? GFP_ATOMIC : GFP_KERNEL);
if (unlikely(!pg))
return -ENOMEM;
*input_arg = page_address(pg);
@ -254,6 +255,7 @@ static int __init hv_pci_init(void)
static int hv_suspend(void)
{
union hv_x64_msr_hypercall_contents hypercall_msr;
int ret;
/*
* Reset the hypercall page as it is going to be invalidated
@ -270,12 +272,17 @@ static int hv_suspend(void)
hypercall_msr.enable = 0;
wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
return 0;
ret = hv_cpu_die(0);
return ret;
}
static void hv_resume(void)
{
union hv_x64_msr_hypercall_contents hypercall_msr;
int ret;
ret = hv_cpu_init(0);
WARN_ON(ret);
/* Re-enable the hypercall page */
rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
@ -288,6 +295,7 @@ static void hv_resume(void)
hv_hypercall_pg_saved = NULL;
}
/* Note: when the ops are called, only CPU0 is online and IRQs are disabled. */
static struct syscore_ops hv_syscore_ops = {
.suspend = hv_suspend,
.resume = hv_resume,


@ -61,11 +61,12 @@ static inline bool arch_syscall_match_sym_name(const char *sym, const char *name
{
/*
* Compare the symbol name with the system call name. Skip the
* "__x64_sys", "__ia32_sys" or simple "sys" prefix.
* "__x64_sys", "__ia32_sys", "__do_sys" or simple "sys" prefix.
*/
return !strcmp(sym + 3, name + 3) ||
(!strncmp(sym, "__x64_", 6) && !strcmp(sym + 9, name + 3)) ||
(!strncmp(sym, "__ia32_", 7) && !strcmp(sym + 10, name + 3));
(!strncmp(sym, "__ia32_", 7) && !strcmp(sym + 10, name + 3)) ||
(!strncmp(sym, "__do_sys", 8) && !strcmp(sym + 8, name + 3));
}
#ifndef COMPILE_OFFSETS


@ -1663,8 +1663,8 @@ void kvm_set_msi_irq(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e,
static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
{
/* We can only post Fixed and LowPrio IRQs */
return (irq->delivery_mode == dest_Fixed ||
irq->delivery_mode == dest_LowestPrio);
return (irq->delivery_mode == APIC_DM_FIXED ||
irq->delivery_mode == APIC_DM_LOWEST);
}
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)


@ -35,6 +35,8 @@ typedef int (*hyperv_fill_flush_list_func)(
rdmsrl(HV_X64_MSR_SINT0 + int_num, val)
#define hv_set_synint_state(int_num, val) \
wrmsrl(HV_X64_MSR_SINT0 + int_num, val)
#define hv_recommend_using_aeoi() \
(!(ms_hyperv.hints & HV_DEPRECATING_AEOI_RECOMMENDED))
#define hv_get_crash_ctl(val) \
rdmsrl(HV_X64_MSR_CRASH_CTL, val)


@ -19,7 +19,7 @@ struct unwind_state {
#if defined(CONFIG_UNWINDER_ORC)
bool signal, full_regs;
unsigned long sp, bp, ip;
struct pt_regs *regs;
struct pt_regs *regs, *prev_regs;
#elif defined(CONFIG_UNWINDER_FRAME_POINTER)
bool got_irq;
unsigned long *bp, *orig_sp, ip;


@ -352,8 +352,6 @@ static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen)
* According to Intel, MFENCE can do the serialization here.
*/
asm volatile("mfence" : : : "memory");
printk_once(KERN_DEBUG "TSC deadline timer enabled\n");
return;
}
@ -546,7 +544,7 @@ static struct clock_event_device lapic_clockevent = {
};
static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
static u32 hsx_deadline_rev(void)
static __init u32 hsx_deadline_rev(void)
{
switch (boot_cpu_data.x86_stepping) {
case 0x02: return 0x3a; /* EP */
@ -556,7 +554,7 @@ static u32 hsx_deadline_rev(void)
return ~0U;
}
static u32 bdx_deadline_rev(void)
static __init u32 bdx_deadline_rev(void)
{
switch (boot_cpu_data.x86_stepping) {
case 0x02: return 0x00000011;
@ -568,7 +566,7 @@ static u32 bdx_deadline_rev(void)
return ~0U;
}
static u32 skx_deadline_rev(void)
static __init u32 skx_deadline_rev(void)
{
switch (boot_cpu_data.x86_stepping) {
case 0x03: return 0x01000136;
@ -581,7 +579,7 @@ static u32 skx_deadline_rev(void)
return ~0U;
}
static const struct x86_cpu_id deadline_match[] = {
static const struct x86_cpu_id deadline_match[] __initconst = {
X86_MATCH_INTEL_FAM6_MODEL( HASWELL_X, &hsx_deadline_rev),
X86_MATCH_INTEL_FAM6_MODEL( BROADWELL_X, 0x0b000020),
X86_MATCH_INTEL_FAM6_MODEL( BROADWELL_D, &bdx_deadline_rev),
@ -603,18 +601,19 @@ static const struct x86_cpu_id deadline_match[] = {
{},
};
static void apic_check_deadline_errata(void)
static __init bool apic_validate_deadline_timer(void)
{
const struct x86_cpu_id *m;
u32 rev;
if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER) ||
boot_cpu_has(X86_FEATURE_HYPERVISOR))
return;
if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER))
return false;
if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
return true;
m = x86_match_cpu(deadline_match);
if (!m)
return;
return true;
/*
* Function pointers will have the MSB set due to address layout,
@ -626,11 +625,12 @@ static void apic_check_deadline_errata(void)
rev = (u32)m->driver_data;
if (boot_cpu_data.microcode >= rev)
return;
return true;
setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER);
pr_err(FW_BUG "TSC_DEADLINE disabled due to Errata; "
"please update microcode to version: 0x%x (or later)\n", rev);
return false;
}
/*
@ -2092,7 +2092,8 @@ void __init init_apic_mappings(void)
{
unsigned int new_apicid;
apic_check_deadline_errata();
if (apic_validate_deadline_timer())
pr_debug("TSC deadline timer available\n");
if (x2apic_mode) {
boot_cpu_physical_apicid = read_apic_id();


@ -183,7 +183,8 @@ recursion_check:
*/
if (visit_mask) {
if (*visit_mask & (1UL << info->type)) {
printk_deferred_once(KERN_WARNING "WARNING: stack recursion on stack type %d\n", info->type);
if (task == current)
printk_deferred_once(KERN_WARNING "WARNING: stack recursion on stack type %d\n", info->type);
goto unknown;
}
*visit_mask |= 1UL << info->type;


@ -344,6 +344,9 @@ bad_address:
if (IS_ENABLED(CONFIG_X86_32))
goto the_end;
if (state->task != current)
goto the_end;
if (state->regs) {
printk_deferred_once(KERN_WARNING
"WARNING: kernel stack regs at %p in %s:%d has bad 'bp' value %p\n",


@ -8,19 +8,21 @@
#include <asm/orc_lookup.h>
#define orc_warn(fmt, ...) \
printk_deferred_once(KERN_WARNING pr_fmt("WARNING: " fmt), ##__VA_ARGS__)
printk_deferred_once(KERN_WARNING "WARNING: " fmt, ##__VA_ARGS__)
#define orc_warn_current(args...) \
({ \
if (state->task == current) \
orc_warn(args); \
})
extern int __start_orc_unwind_ip[];
extern int __stop_orc_unwind_ip[];
extern struct orc_entry __start_orc_unwind[];
extern struct orc_entry __stop_orc_unwind[];
static DEFINE_MUTEX(sort_mutex);
int *cur_orc_ip_table = __start_orc_unwind_ip;
struct orc_entry *cur_orc_table = __start_orc_unwind;
unsigned int lookup_num_blocks;
bool orc_init;
static bool orc_init __ro_after_init;
static unsigned int lookup_num_blocks __ro_after_init;
static inline unsigned long orc_ip(const int *ip)
{
@ -142,9 +144,6 @@ static struct orc_entry *orc_find(unsigned long ip)
{
static struct orc_entry *orc;
if (!orc_init)
return NULL;
if (ip == 0)
return &null_orc_entry;
@ -189,6 +188,10 @@ static struct orc_entry *orc_find(unsigned long ip)
#ifdef CONFIG_MODULES
static DEFINE_MUTEX(sort_mutex);
static int *cur_orc_ip_table = __start_orc_unwind_ip;
static struct orc_entry *cur_orc_table = __start_orc_unwind;
static void orc_sort_swap(void *_a, void *_b, int size)
{
struct orc_entry *orc_a, *orc_b;
@ -381,9 +384,38 @@ static bool deref_stack_iret_regs(struct unwind_state *state, unsigned long addr
return true;
}
/*
* If state->regs is non-NULL, and points to a full pt_regs, just get the reg
* value from state->regs.
*
* Otherwise, if state->regs just points to IRET regs, and the previous frame
* had full regs, it's safe to get the value from the previous regs. This can
* happen when early/late IRQ entry code gets interrupted by an NMI.
*/
static bool get_reg(struct unwind_state *state, unsigned int reg_off,
unsigned long *val)
{
unsigned int reg = reg_off/8;
if (!state->regs)
return false;
if (state->full_regs) {
*val = ((unsigned long *)state->regs)[reg];
return true;
}
if (state->prev_regs) {
*val = ((unsigned long *)state->prev_regs)[reg];
return true;
}
return false;
}
bool unwind_next_frame(struct unwind_state *state)
{
unsigned long ip_p, sp, orig_ip = state->ip, prev_sp = state->sp;
unsigned long ip_p, sp, tmp, orig_ip = state->ip, prev_sp = state->sp;
enum stack_type prev_type = state->stack_info.type;
struct orc_entry *orc;
bool indirect = false;
@ -445,43 +477,39 @@ bool unwind_next_frame(struct unwind_state *state)
break;
case ORC_REG_R10:
if (!state->regs || !state->full_regs) {
orc_warn("missing regs for base reg R10 at ip %pB\n",
(void *)state->ip);
if (!get_reg(state, offsetof(struct pt_regs, r10), &sp)) {
orc_warn_current("missing R10 value at %pB\n",
(void *)state->ip);
goto err;
}
sp = state->regs->r10;
break;
case ORC_REG_R13:
if (!state->regs || !state->full_regs) {
orc_warn("missing regs for base reg R13 at ip %pB\n",
(void *)state->ip);
if (!get_reg(state, offsetof(struct pt_regs, r13), &sp)) {
orc_warn_current("missing R13 value at %pB\n",
(void *)state->ip);
goto err;
}
sp = state->regs->r13;
break;
case ORC_REG_DI:
if (!state->regs || !state->full_regs) {
orc_warn("missing regs for base reg DI at ip %pB\n",
(void *)state->ip);
if (!get_reg(state, offsetof(struct pt_regs, di), &sp)) {
orc_warn_current("missing RDI value at %pB\n",
(void *)state->ip);
goto err;
}
sp = state->regs->di;
break;
case ORC_REG_DX:
if (!state->regs || !state->full_regs) {
orc_warn("missing regs for base reg DX at ip %pB\n",
(void *)state->ip);
if (!get_reg(state, offsetof(struct pt_regs, dx), &sp)) {
orc_warn_current("missing DX value at %pB\n",
(void *)state->ip);
goto err;
}
sp = state->regs->dx;
break;
default:
orc_warn("unknown SP base reg %d for ip %pB\n",
orc_warn("unknown SP base reg %d at %pB\n",
orc->sp_reg, (void *)state->ip);
goto err;
}
@ -504,44 +532,48 @@ bool unwind_next_frame(struct unwind_state *state)
state->sp = sp;
state->regs = NULL;
state->prev_regs = NULL;
state->signal = false;
break;
case ORC_TYPE_REGS:
if (!deref_stack_regs(state, sp, &state->ip, &state->sp)) {
orc_warn("can't dereference registers at %p for ip %pB\n",
(void *)sp, (void *)orig_ip);
orc_warn_current("can't access registers at %pB\n",
(void *)orig_ip);
goto err;
}
state->regs = (struct pt_regs *)sp;
state->prev_regs = NULL;
state->full_regs = true;
state->signal = true;
break;
case ORC_TYPE_REGS_IRET:
if (!deref_stack_iret_regs(state, sp, &state->ip, &state->sp)) {
orc_warn("can't dereference iret registers at %p for ip %pB\n",
(void *)sp, (void *)orig_ip);
orc_warn_current("can't access iret registers at %pB\n",
(void *)orig_ip);
goto err;
}
if (state->full_regs)
state->prev_regs = state->regs;
state->regs = (void *)sp - IRET_FRAME_OFFSET;
state->full_regs = false;
state->signal = true;
break;
default:
orc_warn("unknown .orc_unwind entry type %d for ip %pB\n",
orc_warn("unknown .orc_unwind entry type %d at %pB\n",
orc->type, (void *)orig_ip);
break;
goto err;
}
/* Find BP: */
switch (orc->bp_reg) {
case ORC_REG_UNDEFINED:
if (state->regs && state->full_regs)
state->bp = state->regs->bp;
if (get_reg(state, offsetof(struct pt_regs, bp), &tmp))
state->bp = tmp;
break;
case ORC_REG_PREV_SP:
@ -564,8 +596,8 @@ bool unwind_next_frame(struct unwind_state *state)
if (state->stack_info.type == prev_type &&
on_stack(&state->stack_info, (void *)state->sp, sizeof(long)) &&
state->sp <= prev_sp) {
orc_warn("stack going in the wrong direction? ip=%pB\n",
(void *)orig_ip);
orc_warn_current("stack going in the wrong direction? at %pB\n",
(void *)orig_ip);
goto err;
}
@ -585,6 +617,9 @@ EXPORT_SYMBOL_GPL(unwind_next_frame);
void __unwind_start(struct unwind_state *state, struct task_struct *task,
struct pt_regs *regs, unsigned long *first_frame)
{
if (!orc_init)
goto done;
memset(state, 0, sizeof(*state));
state->task = task;
@ -651,7 +686,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
/* Otherwise, skip ahead to the user-specified starting frame: */
while (!unwind_done(state) &&
(!on_stack(&state->stack_info, first_frame, sizeof(long)) ||
state->sp <= (unsigned long)first_frame))
state->sp < (unsigned long)first_frame))
unwind_next_frame(state);
return;


@ -225,12 +225,12 @@ static int ioapic_set_irq(struct kvm_ioapic *ioapic, unsigned int irq,
}
/*
* AMD SVM AVIC accelerate EOI write and do not trap,
* in-kernel IOAPIC will not be able to receive the EOI.
* In this case, we do lazy update of the pending EOI when
* trying to set IOAPIC irq.
* AMD SVM AVIC accelerate EOI write iff the interrupt is edge
* triggered, in which case the in-kernel IOAPIC will not be able
* to receive the EOI. In this case, we do a lazy update of the
* pending EOI when trying to set IOAPIC irq.
*/
if (kvm_apicv_activated(ioapic->kvm))
if (edge && kvm_apicv_activated(ioapic->kvm))
ioapic_lazy_update_eoi(ioapic, irq);
/*


@ -345,7 +345,7 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
return NULL;
/* Pin the user virtual address. */
npinned = get_user_pages_fast(uaddr, npages, FOLL_WRITE, pages);
npinned = get_user_pages_fast(uaddr, npages, write ? FOLL_WRITE : 0, pages);
if (npinned != npages) {
pr_err("SEV: Failure locking %lu pages.\n", npages);
goto err;


@ -1752,6 +1752,8 @@ static int db_interception(struct vcpu_svm *svm)
if (svm->vcpu.guest_debug &
(KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP)) {
kvm_run->exit_reason = KVM_EXIT_DEBUG;
kvm_run->debug.arch.dr6 = svm->vmcb->save.dr6;
kvm_run->debug.arch.dr7 = svm->vmcb->save.dr7;
kvm_run->debug.arch.pc =
svm->vmcb->save.cs.base + svm->vmcb->save.rip;
kvm_run->debug.arch.exception = DB_VECTOR;


@ -5165,7 +5165,7 @@ static int handle_invept(struct kvm_vcpu *vcpu)
*/
break;
default:
BUG_ON(1);
BUG();
break;
}


@ -82,6 +82,9 @@ SYM_FUNC_START(vmx_vmexit)
/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
/* Clear RFLAGS.CF and RFLAGS.ZF to preserve VM-Exit, i.e. !VM-Fail. */
or $1, %_ASM_AX
pop %_ASM_AX
.Lvmexit_skip_rsb:
#endif


@ -926,19 +926,6 @@ EXPORT_SYMBOL_GPL(kvm_set_xcr);
__reserved_bits; \
})
static u64 kvm_host_cr4_reserved_bits(struct cpuinfo_x86 *c)
{
u64 reserved_bits = __cr4_reserved_bits(cpu_has, c);
if (kvm_cpu_cap_has(X86_FEATURE_LA57))
reserved_bits &= ~X86_CR4_LA57;
if (kvm_cpu_cap_has(X86_FEATURE_UMIP))
reserved_bits &= ~X86_CR4_UMIP;
return reserved_bits;
}
static int kvm_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
{
if (cr4 & cr4_reserved_bits)
@ -3385,6 +3372,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_GET_MSR_FEATURES:
case KVM_CAP_MSR_PLATFORM_INFO:
case KVM_CAP_EXCEPTION_PAYLOAD:
case KVM_CAP_SET_GUEST_DEBUG:
r = 1;
break;
case KVM_CAP_SYNC_REGS:
@ -9675,7 +9663,9 @@ int kvm_arch_hardware_setup(void *opaque)
if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
supported_xss = 0;
cr4_reserved_bits = kvm_host_cr4_reserved_bits(&boot_cpu_data);
#define __kvm_cpu_cap_has(UNUSED_, f) kvm_cpu_cap_has(f)
cr4_reserved_bits = __cr4_reserved_bits(__kvm_cpu_cap_has, UNUSED_);
#undef __kvm_cpu_cap_has
if (kvm_has_tsc_control) {
/*
@ -9707,7 +9697,8 @@ int kvm_arch_check_processor_compat(void *opaque)
WARN_ON(!irqs_disabled());
if (kvm_host_cr4_reserved_bits(c) != cr4_reserved_bits)
if (__cr4_reserved_bits(cpu_has, c) !=
__cr4_reserved_bits(cpu_has, &boot_cpu_data))
return -EIO;
return ops->check_processor_compatibility();

View File

@ -43,7 +43,8 @@ struct cpa_data {
unsigned long pfn;
unsigned int flags;
unsigned int force_split : 1,
force_static_prot : 1;
force_static_prot : 1,
force_flush_all : 1;
struct page **pages;
};
@ -355,10 +356,10 @@ static void cpa_flush(struct cpa_data *data, int cache)
return;
}
if (cpa->numpages <= tlb_single_page_flush_ceiling)
on_each_cpu(__cpa_flush_tlb, cpa, 1);
else
if (cpa->force_flush_all || cpa->numpages > tlb_single_page_flush_ceiling)
flush_tlb_all();
else
on_each_cpu(__cpa_flush_tlb, cpa, 1);
if (!cache)
return;
@ -1598,6 +1599,8 @@ static int cpa_process_alias(struct cpa_data *cpa)
alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
alias_cpa.curpage = 0;
cpa->force_flush_all = 1;
ret = __change_page_attr_set_clr(&alias_cpa, 0);
if (ret)
return ret;
@ -1618,6 +1621,7 @@ static int cpa_process_alias(struct cpa_data *cpa)
alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
alias_cpa.curpage = 0;
cpa->force_flush_all = 1;
/*
* The high mapping range is imprecise, so ignore the
* return value.

View File

@ -123,6 +123,7 @@
#include <linux/ioprio.h>
#include <linux/sbitmap.h>
#include <linux/delay.h>
#include <linux/backing-dev.h>
#include "blk.h"
#include "blk-mq.h"
@ -4976,8 +4977,9 @@ bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
switch (ioprio_class) {
default:
dev_err(bfqq->bfqd->queue->backing_dev_info->dev,
"bfq: bad prio class %d\n", ioprio_class);
pr_err("bdi %s: bfq: bad prio class %d\n",
bdi_dev_name(bfqq->bfqd->queue->backing_dev_info),
ioprio_class);
/* fall through */
case IOPRIO_CLASS_NONE:
/*

View File

@ -496,7 +496,7 @@ const char *blkg_dev_name(struct blkcg_gq *blkg)
{
/* some drivers (floppy) instantiate a queue w/o disk registered */
if (blkg->q->backing_dev_info->dev)
return dev_name(blkg->q->backing_dev_info->dev);
return bdi_dev_name(blkg->q->backing_dev_info);
return NULL;
}

View File

@ -466,7 +466,7 @@ struct ioc_gq {
*/
atomic64_t vtime;
atomic64_t done_vtime;
atomic64_t abs_vdebt;
u64 abs_vdebt;
u64 last_vtime;
/*
@ -1142,7 +1142,7 @@ static void iocg_kick_waitq(struct ioc_gq *iocg, struct ioc_now *now)
struct iocg_wake_ctx ctx = { .iocg = iocg };
u64 margin_ns = (u64)(ioc->period_us *
WAITQ_TIMER_MARGIN_PCT / 100) * NSEC_PER_USEC;
u64 abs_vdebt, vdebt, vshortage, expires, oexpires;
u64 vdebt, vshortage, expires, oexpires;
s64 vbudget;
u32 hw_inuse;
@ -1152,18 +1152,15 @@ static void iocg_kick_waitq(struct ioc_gq *iocg, struct ioc_now *now)
vbudget = now->vnow - atomic64_read(&iocg->vtime);
/* pay off debt */
abs_vdebt = atomic64_read(&iocg->abs_vdebt);
vdebt = abs_cost_to_cost(abs_vdebt, hw_inuse);
vdebt = abs_cost_to_cost(iocg->abs_vdebt, hw_inuse);
if (vdebt && vbudget > 0) {
u64 delta = min_t(u64, vbudget, vdebt);
u64 abs_delta = min(cost_to_abs_cost(delta, hw_inuse),
abs_vdebt);
iocg->abs_vdebt);
atomic64_add(delta, &iocg->vtime);
atomic64_add(delta, &iocg->done_vtime);
atomic64_sub(abs_delta, &iocg->abs_vdebt);
if (WARN_ON_ONCE(atomic64_read(&iocg->abs_vdebt) < 0))
atomic64_set(&iocg->abs_vdebt, 0);
iocg->abs_vdebt -= abs_delta;
}
/*
@ -1219,12 +1216,18 @@ static bool iocg_kick_delay(struct ioc_gq *iocg, struct ioc_now *now, u64 cost)
u64 expires, oexpires;
u32 hw_inuse;
lockdep_assert_held(&iocg->waitq.lock);
/* debt-adjust vtime */
current_hweight(iocg, NULL, &hw_inuse);
vtime += abs_cost_to_cost(atomic64_read(&iocg->abs_vdebt), hw_inuse);
vtime += abs_cost_to_cost(iocg->abs_vdebt, hw_inuse);
/* clear or maintain depending on the overage */
if (time_before_eq64(vtime, now->vnow)) {
/*
* Clear or maintain depending on the overage. Non-zero vdebt is what
* guarantees that @iocg is online and future iocg_kick_delay() will
* clear use_delay. Don't leave it on when there's no vdebt.
*/
if (!iocg->abs_vdebt || time_before_eq64(vtime, now->vnow)) {
blkcg_clear_delay(blkg);
return false;
}
@ -1258,9 +1261,12 @@ static enum hrtimer_restart iocg_delay_timer_fn(struct hrtimer *timer)
{
struct ioc_gq *iocg = container_of(timer, struct ioc_gq, delay_timer);
struct ioc_now now;
unsigned long flags;
spin_lock_irqsave(&iocg->waitq.lock, flags);
ioc_now(iocg->ioc, &now);
iocg_kick_delay(iocg, &now, 0);
spin_unlock_irqrestore(&iocg->waitq.lock, flags);
return HRTIMER_NORESTART;
}
@ -1368,14 +1374,13 @@ static void ioc_timer_fn(struct timer_list *timer)
* should have woken up in the last period and expire idle iocgs.
*/
list_for_each_entry_safe(iocg, tiocg, &ioc->active_iocgs, active_list) {
if (!waitqueue_active(&iocg->waitq) &&
!atomic64_read(&iocg->abs_vdebt) && !iocg_is_idle(iocg))
if (!waitqueue_active(&iocg->waitq) && iocg->abs_vdebt &&
!iocg_is_idle(iocg))
continue;
spin_lock(&iocg->waitq.lock);
if (waitqueue_active(&iocg->waitq) ||
atomic64_read(&iocg->abs_vdebt)) {
if (waitqueue_active(&iocg->waitq) || iocg->abs_vdebt) {
/* might be oversleeping vtime / hweight changes, kick */
iocg_kick_waitq(iocg, &now);
iocg_kick_delay(iocg, &now, 0);
@ -1718,28 +1723,49 @@ static void ioc_rqos_throttle(struct rq_qos *rqos, struct bio *bio)
* tests are racy but the races aren't systemic - we only miss once
* in a while which is fine.
*/
if (!waitqueue_active(&iocg->waitq) &&
!atomic64_read(&iocg->abs_vdebt) &&
if (!waitqueue_active(&iocg->waitq) && !iocg->abs_vdebt &&
time_before_eq64(vtime + cost, now.vnow)) {
iocg_commit_bio(iocg, bio, cost);
return;
}
/*
* We're over budget. If @bio has to be issued regardless,
* remember the abs_cost instead of advancing vtime.
* iocg_kick_waitq() will pay off the debt before waking more IOs.
* We activated above but w/o any synchronization. Deactivation is
* synchronized with waitq.lock and we won't get deactivated as long
* as we're waiting or has debt, so we're good if we're activated
* here. In the unlikely case that we aren't, just issue the IO.
*/
spin_lock_irq(&iocg->waitq.lock);
if (unlikely(list_empty(&iocg->active_list))) {
spin_unlock_irq(&iocg->waitq.lock);
iocg_commit_bio(iocg, bio, cost);
return;
}
/*
* We're over budget. If @bio has to be issued regardless, remember
* the abs_cost instead of advancing vtime. iocg_kick_waitq() will pay
* off the debt before waking more IOs.
*
* This way, the debt is continuously paid off each period with the
* actual budget available to the cgroup. If we just wound vtime,
* we would incorrectly use the current hw_inuse for the entire
* amount which, for example, can lead to the cgroup staying
* blocked for a long time even with substantially raised hw_inuse.
* actual budget available to the cgroup. If we just wound vtime, we
* would incorrectly use the current hw_inuse for the entire amount
* which, for example, can lead to the cgroup staying blocked for a
* long time even with substantially raised hw_inuse.
*
* An iocg with vdebt should stay online so that the timer can keep
* deducting its vdebt and [de]activate use_delay mechanism
* accordingly. We don't want to race against the timer trying to
* clear them and leave @iocg inactive w/ dangling use_delay heavily
* penalizing the cgroup and its descendants.
*/
if (bio_issue_as_root_blkg(bio) || fatal_signal_pending(current)) {
atomic64_add(abs_cost, &iocg->abs_vdebt);
iocg->abs_vdebt += abs_cost;
if (iocg_kick_delay(iocg, &now, cost))
blkcg_schedule_throttle(rqos->q,
(bio->bi_opf & REQ_SWAP) == REQ_SWAP);
spin_unlock_irq(&iocg->waitq.lock);
return;
}
@ -1756,20 +1782,6 @@ static void ioc_rqos_throttle(struct rq_qos *rqos, struct bio *bio)
* All waiters are on iocg->waitq and the wait states are
* synchronized using waitq.lock.
*/
spin_lock_irq(&iocg->waitq.lock);
/*
* We activated above but w/o any synchronization. Deactivation is
* synchronized with waitq.lock and we won't get deactivated as
* long as we're waiting, so we're good if we're activated here.
* In the unlikely case that we are deactivated, just issue the IO.
*/
if (unlikely(list_empty(&iocg->active_list))) {
spin_unlock_irq(&iocg->waitq.lock);
iocg_commit_bio(iocg, bio, cost);
return;
}
init_waitqueue_func_entry(&wait.wait, iocg_wake_fn);
wait.wait.private = current;
wait.bio = bio;
@ -1801,6 +1813,7 @@ static void ioc_rqos_merge(struct rq_qos *rqos, struct request *rq,
struct ioc_now now;
u32 hw_inuse;
u64 abs_cost, cost;
unsigned long flags;
/* bypass if disabled or for root cgroup */
if (!ioc->enabled || !iocg->level)
@ -1820,15 +1833,28 @@ static void ioc_rqos_merge(struct rq_qos *rqos, struct request *rq,
iocg->cursor = bio_end;
/*
* Charge if there's enough vtime budget and the existing request
* has cost assigned. Otherwise, account it as debt. See debt
* handling in ioc_rqos_throttle() for details.
* Charge if there's enough vtime budget and the existing request has
* cost assigned.
*/
if (rq->bio && rq->bio->bi_iocost_cost &&
time_before_eq64(atomic64_read(&iocg->vtime) + cost, now.vnow))
time_before_eq64(atomic64_read(&iocg->vtime) + cost, now.vnow)) {
iocg_commit_bio(iocg, bio, cost);
else
atomic64_add(abs_cost, &iocg->abs_vdebt);
return;
}
/*
* Otherwise, account it as debt if @iocg is online, which it should
* be for the vast majority of cases. See debt handling in
* ioc_rqos_throttle() for details.
*/
spin_lock_irqsave(&iocg->waitq.lock, flags);
if (likely(!list_empty(&iocg->active_list))) {
iocg->abs_vdebt += abs_cost;
iocg_kick_delay(iocg, &now, cost);
} else {
iocg_commit_bio(iocg, bio, cost);
}
spin_unlock_irqrestore(&iocg->waitq.lock, flags);
}
static void ioc_rqos_done_bio(struct rq_qos *rqos, struct bio *bio)
@ -1998,7 +2024,6 @@ static void ioc_pd_init(struct blkg_policy_data *pd)
iocg->ioc = ioc;
atomic64_set(&iocg->vtime, now.vnow);
atomic64_set(&iocg->done_vtime, now.vnow);
atomic64_set(&iocg->abs_vdebt, 0);
atomic64_set(&iocg->active_period, atomic64_read(&ioc->cur_period));
INIT_LIST_HEAD(&iocg->active_list);
iocg->hweight_active = HWEIGHT_WHOLE;
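
Illustrative sketch (not part of this commit): the blk-iocost hunks above turn abs_vdebt from an atomic64_t into a plain u64 that is only touched under waitq.lock, so adding debt, paying it off and waking waiters can no longer race with each other. A minimal userspace analogue of that "counter shares the waiters' lock" pattern, with all names hypothetical:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct iogrp {
	pthread_mutex_t lock;	/* guards both fields below, like waitq.lock */
	pthread_cond_t waitq;	/* stands in for the iocg wait queue */
	uint64_t abs_debt;	/* plain integer; only read/written under lock */
};

static void add_debt(struct iogrp *g, uint64_t cost)
{
	pthread_mutex_lock(&g->lock);
	g->abs_debt += cost;			/* no atomics needed under the lock */
	pthread_mutex_unlock(&g->lock);
}

static void pay_off(struct iogrp *g, uint64_t budget)
{
	pthread_mutex_lock(&g->lock);
	uint64_t delta = budget < g->abs_debt ? budget : g->abs_debt;

	g->abs_debt -= delta;			/* clamped above, cannot underflow */
	if (!g->abs_debt)
		pthread_cond_broadcast(&g->waitq);	/* debt cleared: wake waiters */
	pthread_mutex_unlock(&g->lock);
}

int main(void)
{
	struct iogrp g = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0 };

	add_debt(&g, 100);
	pay_off(&g, 60);
	pay_off(&g, 60);
	printf("remaining debt: %llu\n", (unsigned long long)g.abs_debt);
	return 0;
}

Because the counter is only touched under the lock, the subtraction can be clamped to the current debt and can never go negative, which is what allows the WARN_ON_ONCE() in the old code above to be dropped.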

View File

@ -496,7 +496,7 @@ int blk_drop_partitions(struct gendisk *disk, struct block_device *bdev)
if (!disk_part_scan_enabled(disk))
return 0;
if (bdev->bd_part_count || bdev->bd_openers > 1)
if (bdev->bd_part_count)
return -EBUSY;
res = invalidate_partition(disk, 0);
if (res)

View File

@ -287,7 +287,7 @@ static void exit_tfm(struct crypto_skcipher *tfm)
crypto_free_skcipher(ctx->child);
}
static void free(struct skcipher_instance *inst)
static void free_inst(struct skcipher_instance *inst)
{
crypto_drop_skcipher(skcipher_instance_ctx(inst));
kfree(inst);
@ -400,12 +400,12 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
inst->alg.encrypt = encrypt;
inst->alg.decrypt = decrypt;
inst->free = free;
inst->free = free_inst;
err = skcipher_register_instance(tmpl, inst);
if (err) {
err_free_inst:
free(inst);
free_inst(inst);
}
return err;
}

View File

@ -322,7 +322,7 @@ static void exit_tfm(struct crypto_skcipher *tfm)
crypto_free_cipher(ctx->tweak);
}
static void free(struct skcipher_instance *inst)
static void free_inst(struct skcipher_instance *inst)
{
crypto_drop_skcipher(skcipher_instance_ctx(inst));
kfree(inst);
@ -434,12 +434,12 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
inst->alg.encrypt = encrypt;
inst->alg.decrypt = decrypt;
inst->free = free;
inst->free = free_inst;
err = skcipher_register_instance(tmpl, inst);
if (err) {
err_free_inst:
free(inst);
free_inst(inst);
}
return err;
}

View File

@ -273,13 +273,13 @@ int acpi_device_set_power(struct acpi_device *device, int state)
end:
if (result) {
dev_warn(&device->dev, "Failed to change power state to %s\n",
acpi_power_state_string(state));
acpi_power_state_string(target_state));
} else {
device->power.state = target_state;
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Device [%s] transitioned to %s\n",
device->pnp.bus_id,
acpi_power_state_string(state)));
acpi_power_state_string(target_state)));
}
return result;

View File

@ -645,6 +645,7 @@ static void amba_device_initialize(struct amba_device *dev, const char *name)
dev->dev.release = amba_device_release;
dev->dev.bus = &amba_bustype;
dev->dev.dma_mask = &dev->dev.coherent_dma_mask;
dev->dev.dma_parms = &dev->dma_parms;
dev->res.name = dev_name(&dev->dev);
}

View File

@ -256,7 +256,8 @@ static int try_to_bring_up_master(struct master *master,
ret = master->ops->bind(master->dev);
if (ret < 0) {
devres_release_group(master->dev, NULL);
dev_info(master->dev, "master bind failed: %d\n", ret);
if (ret != -EPROBE_DEFER)
dev_info(master->dev, "master bind failed: %d\n", ret);
return ret;
}
@ -611,8 +612,9 @@ static int component_bind(struct component *component, struct master *master,
devres_release_group(component->dev, NULL);
devres_release_group(master->dev, NULL);
dev_err(master->dev, "failed to bind %s (ops %ps): %d\n",
dev_name(component->dev), component->ops, ret);
if (ret != -EPROBE_DEFER)
dev_err(master->dev, "failed to bind %s (ops %ps): %d\n",
dev_name(component->dev), component->ops, ret);
}
return ret;

View File

@ -2370,6 +2370,11 @@ u32 fw_devlink_get_flags(void)
return fw_devlink_flags;
}
static bool fw_devlink_is_permissive(void)
{
return fw_devlink_flags == DL_FLAG_SYNC_STATE_ONLY;
}
/**
* device_add - add device to device hierarchy.
* @dev: device.
@ -2524,7 +2529,7 @@ int device_add(struct device *dev)
if (fw_devlink_flags && is_fwnode_dev &&
fwnode_has_op(dev->fwnode, add_links)) {
fw_ret = fwnode_call_int_op(dev->fwnode, add_links, dev);
if (fw_ret == -ENODEV)
if (fw_ret == -ENODEV && !fw_devlink_is_permissive())
device_link_wait_for_mandatory_supplier(dev);
else if (fw_ret)
device_link_wait_for_optional_supplier(dev);

View File

@ -224,17 +224,9 @@ static int deferred_devs_show(struct seq_file *s, void *data)
}
DEFINE_SHOW_ATTRIBUTE(deferred_devs);
#ifdef CONFIG_MODULES
/*
* In the case of modules, set the default probe timeout to
* 30 seconds to give userland some time to load needed modules
*/
int driver_deferred_probe_timeout = 30;
#else
/* In the case of !modules, no probe timeout needed */
int driver_deferred_probe_timeout = -1;
#endif
int driver_deferred_probe_timeout;
EXPORT_SYMBOL_GPL(driver_deferred_probe_timeout);
static DECLARE_WAIT_QUEUE_HEAD(probe_timeout_waitqueue);
static int __init deferred_probe_timeout_setup(char *str)
{
@ -266,8 +258,8 @@ int driver_deferred_probe_check_state(struct device *dev)
return -ENODEV;
}
if (!driver_deferred_probe_timeout) {
dev_WARN(dev, "deferred probe timeout, ignoring dependency");
if (!driver_deferred_probe_timeout && initcalls_done) {
dev_warn(dev, "deferred probe timeout, ignoring dependency");
return -ETIMEDOUT;
}
@ -284,6 +276,7 @@ static void deferred_probe_timeout_work_func(struct work_struct *work)
list_for_each_entry_safe(private, p, &deferred_probe_pending_list, deferred_probe)
dev_info(private->device, "deferred probe pending");
wake_up(&probe_timeout_waitqueue);
}
static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func);
@ -658,6 +651,9 @@ int driver_probe_done(void)
*/
void wait_for_device_probe(void)
{
/* wait for probe timeout */
wait_event(probe_timeout_waitqueue, !driver_deferred_probe_timeout);
/* wait for the deferred probe workqueue to finish */
flush_work(&deferred_probe_work);

View File

@ -380,6 +380,8 @@ struct platform_object {
*/
static void setup_pdev_dma_masks(struct platform_device *pdev)
{
pdev->dev.dma_parms = &pdev->dma_parms;
if (!pdev->dev.coherent_dma_mask)
pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
if (!pdev->dev.dma_mask) {

View File

@ -33,6 +33,15 @@ struct virtio_blk_vq {
} ____cacheline_aligned_in_smp;
struct virtio_blk {
/*
* This mutex must be held by anything that may run after
* virtblk_remove() sets vblk->vdev to NULL.
*
* blk-mq, virtqueue processing, and sysfs attribute code paths are
* shut down before vblk->vdev is set to NULL and therefore do not need
* to hold this mutex.
*/
struct mutex vdev_mutex;
struct virtio_device *vdev;
/* The disk structure for the kernel. */
@ -44,6 +53,13 @@ struct virtio_blk {
/* Process context for config space updates */
struct work_struct config_work;
/*
* Tracks references from block_device_operations open/release and
* virtio_driver probe/remove so this object can be freed once no
* longer in use.
*/
refcount_t refs;
/* What host tells us, plus 2 for header & tailer. */
unsigned int sg_elems;
@ -295,10 +311,55 @@ out:
return err;
}
static void virtblk_get(struct virtio_blk *vblk)
{
refcount_inc(&vblk->refs);
}
static void virtblk_put(struct virtio_blk *vblk)
{
if (refcount_dec_and_test(&vblk->refs)) {
ida_simple_remove(&vd_index_ida, vblk->index);
mutex_destroy(&vblk->vdev_mutex);
kfree(vblk);
}
}
static int virtblk_open(struct block_device *bd, fmode_t mode)
{
struct virtio_blk *vblk = bd->bd_disk->private_data;
int ret = 0;
mutex_lock(&vblk->vdev_mutex);
if (vblk->vdev)
virtblk_get(vblk);
else
ret = -ENXIO;
mutex_unlock(&vblk->vdev_mutex);
return ret;
}
static void virtblk_release(struct gendisk *disk, fmode_t mode)
{
struct virtio_blk *vblk = disk->private_data;
virtblk_put(vblk);
}
/* We provide getgeo only to please some old bootloader/partitioning tools */
static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
{
struct virtio_blk *vblk = bd->bd_disk->private_data;
int ret = 0;
mutex_lock(&vblk->vdev_mutex);
if (!vblk->vdev) {
ret = -ENXIO;
goto out;
}
/* see if the host passed in geometry config */
if (virtio_has_feature(vblk->vdev, VIRTIO_BLK_F_GEOMETRY)) {
@ -314,11 +375,15 @@ static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
geo->sectors = 1 << 5;
geo->cylinders = get_capacity(bd->bd_disk) >> 11;
}
return 0;
out:
mutex_unlock(&vblk->vdev_mutex);
return ret;
}
static const struct block_device_operations virtblk_fops = {
.owner = THIS_MODULE,
.open = virtblk_open,
.release = virtblk_release,
.getgeo = virtblk_getgeo,
};
@ -655,6 +720,10 @@ static int virtblk_probe(struct virtio_device *vdev)
goto out_free_index;
}
/* This reference is dropped in virtblk_remove(). */
refcount_set(&vblk->refs, 1);
mutex_init(&vblk->vdev_mutex);
vblk->vdev = vdev;
vblk->sg_elems = sg_elems;
@ -820,8 +889,6 @@ out:
static void virtblk_remove(struct virtio_device *vdev)
{
struct virtio_blk *vblk = vdev->priv;
int index = vblk->index;
int refc;
/* Make sure no work handler is accessing the device. */
flush_work(&vblk->config_work);
@ -831,18 +898,21 @@ static void virtblk_remove(struct virtio_device *vdev)
blk_mq_free_tag_set(&vblk->tag_set);
mutex_lock(&vblk->vdev_mutex);
/* Stop all the virtqueues. */
vdev->config->reset(vdev);
refc = kref_read(&disk_to_dev(vblk->disk)->kobj.kref);
/* Virtqueues are stopped, nothing can use vblk->vdev anymore. */
vblk->vdev = NULL;
put_disk(vblk->disk);
vdev->config->del_vqs(vdev);
kfree(vblk->vqs);
kfree(vblk);
/* Only free device id if we don't have any users */
if (refc == 1)
ida_simple_remove(&vd_index_ida, index);
mutex_unlock(&vblk->vdev_mutex);
virtblk_put(vblk);
}
#ifdef CONFIG_PM_SLEEP
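
Illustrative sketch (not part of this commit): the virtio_blk hunks above add a reference count plus a vdev_mutex so that open/release and getgeo can safely race with virtblk_remove(), which now only clears vblk->vdev and drops the probe reference instead of freeing the structure outright. A rough userspace analogue of that lifetime pattern, with all names hypothetical:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_dev { int id; };

struct blk {
	pthread_mutex_t dev_mutex;	/* guards 'dev' against concurrent remove */
	struct fake_dev *dev;		/* NULL once the device has been removed */
	atomic_int refs;		/* open/release and probe/remove references */
};

static void blk_put(struct blk *b)
{
	if (atomic_fetch_sub(&b->refs, 1) == 1) {
		/* last reference gone: nobody can reach 'b' anymore, free it */
		pthread_mutex_destroy(&b->dev_mutex);
		free(b);
	}
}

static int blk_open(struct blk *b)
{
	int ret = 0;

	pthread_mutex_lock(&b->dev_mutex);
	if (b->dev)
		atomic_fetch_add(&b->refs, 1);	/* keep 'b' alive while open */
	else
		ret = -1;			/* device already removed */
	pthread_mutex_unlock(&b->dev_mutex);
	return ret;
}

static void blk_remove(struct blk *b)
{
	pthread_mutex_lock(&b->dev_mutex);
	b->dev = NULL;			/* later opens now fail */
	pthread_mutex_unlock(&b->dev_mutex);
	blk_put(b);			/* drop the initial probe reference */
}

int main(void)
{
	struct blk *b = calloc(1, sizeof(*b));
	static struct fake_dev d = { 42 };

	pthread_mutex_init(&b->dev_mutex, NULL);
	b->dev = &d;
	atomic_init(&b->refs, 1);			/* probe reference */

	printf("open before remove: %d\n", blk_open(b));	/* succeeds */
	blk_remove(b);
	printf("open after remove:  %d\n", blk_open(b));	/* fails */
	blk_put(b);		/* release of the earlier open frees 'b' */
	return 0;
}

The last blk_put(), whichever path it comes from, is the one that frees the object, mirroring how ida_simple_remove() and kfree() moved into virtblk_put() above.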

View File

@ -812,10 +812,9 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
if (!mhi_cntrl)
return -EINVAL;
if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put)
return -EINVAL;
if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status)
if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put ||
!mhi_cntrl->status_cb || !mhi_cntrl->read_reg ||
!mhi_cntrl->write_reg)
return -EINVAL;
ret = parse_config(mhi_cntrl, config);

View File

@ -11,9 +11,6 @@
extern struct bus_type mhi_bus_type;
/* MHI MMIO register mapping */
#define PCI_INVALID_READ(val) (val == U32_MAX)
#define MHIREGLEN (0x0)
#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
#define MHIREGLEN_MHIREGLEN_SHIFT (0)

View File

@ -18,16 +18,7 @@
int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 *out)
{
u32 tmp = readl(base + offset);
/* If there is any unexpected value, query the link status */
if (PCI_INVALID_READ(tmp) &&
mhi_cntrl->link_status(mhi_cntrl))
return -EIO;
*out = tmp;
return 0;
return mhi_cntrl->read_reg(mhi_cntrl, base + offset, out);
}
int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
@ -49,7 +40,7 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 val)
{
writel(val, base + offset);
mhi_cntrl->write_reg(mhi_cntrl, base + offset, val);
}
void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
@ -294,7 +285,7 @@ void mhi_create_devices(struct mhi_controller *mhi_cntrl)
!(mhi_chan->ee_mask & BIT(mhi_cntrl->ee)))
continue;
mhi_dev = mhi_alloc_device(mhi_cntrl);
if (!mhi_dev)
if (IS_ERR(mhi_dev))
return;
mhi_dev->dev_type = MHI_DEVICE_XFER;
@ -336,7 +327,8 @@ void mhi_create_devices(struct mhi_controller *mhi_cntrl)
/* Channel name is same for both UL and DL */
mhi_dev->chan_name = mhi_chan->name;
dev_set_name(&mhi_dev->dev, "%04x_%s", mhi_chan->chan,
dev_set_name(&mhi_dev->dev, "%s_%s",
dev_name(mhi_cntrl->cntrl_dev),
mhi_dev->chan_name);
/* Init wakeup source if available */
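
Illustrative sketch (not part of this commit): the MHI hunks above stop calling readl()/writel() directly and instead go through read_reg/write_reg callbacks that the controller driver must supply, with mhi_register_controller() now rejecting controllers that omit them. A minimal userspace analogue of that ops-callback pattern, with all names hypothetical:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

struct ctrl {
	/* accessors supplied by the controller driver */
	int (*read_reg)(struct ctrl *c, void *addr, uint32_t *out);
	void (*write_reg)(struct ctrl *c, void *addr, uint32_t val);
	uint32_t fake_mmio[4];		/* stands in for the device register file */
};

static int register_controller(struct ctrl *c)
{
	/* refuse to register a controller that lacks either accessor */
	if (!c->read_reg || !c->write_reg)
		return -EINVAL;
	return 0;
}

/* the "core" no longer touches the bus itself; it just calls through */
static int core_read_reg(struct ctrl *c, void *addr, uint32_t *out)
{
	return c->read_reg(c, addr, out);
}

static int pcie_read(struct ctrl *c, void *addr, uint32_t *out)
{
	uint32_t val = *(uint32_t *)addr;

	if (val == UINT32_MAX)		/* all-ones read: link is likely down */
		return -EIO;
	*out = val;
	return 0;
}

static void pcie_write(struct ctrl *c, void *addr, uint32_t val)
{
	*(uint32_t *)addr = val;
}

int main(void)
{
	struct ctrl c = { .read_reg = pcie_read, .write_reg = pcie_write };
	uint32_t v;

	if (register_controller(&c))
		return 1;
	c.write_reg(&c, &c.fake_mmio[0], 0x1234);
	if (!core_read_reg(&c, &c.fake_mmio[0], &v))
		printf("reg0 = 0x%x\n", v);
	return 0;
}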

View File

@ -902,7 +902,11 @@ int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
return (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 0 : -EIO;
ret = (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 0 : -ETIMEDOUT;
if (ret)
mhi_power_down(mhi_cntrl, false);
return ret;
}
EXPORT_SYMBOL(mhi_sync_power_up);

View File

@ -1059,7 +1059,7 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
update_turbo_state();
if (global.turbo_disabled) {
pr_warn("Turbo disabled by BIOS or unavailable on processor\n");
pr_notice_once("Turbo disabled by BIOS or unavailable on processor\n");
mutex_unlock(&intel_pstate_limits_lock);
mutex_unlock(&intel_pstate_driver_lock);
return -EPERM;

View File

@ -963,10 +963,12 @@ static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);
struct aead_edesc *edesc;
int ecode = 0;
bool has_bklog;
dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
edesc = rctx->edesc;
has_bklog = edesc->bklog;
if (err)
ecode = caam_jr_strstatus(jrdev, err);
@ -979,7 +981,7 @@ static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
* If no backlog flag, the completion of the request is done
* by CAAM, not crypto engine.
*/
if (!edesc->bklog)
if (!has_bklog)
aead_request_complete(req, ecode);
else
crypto_finalize_aead_request(jrp->engine, req, ecode);
@ -995,10 +997,12 @@ static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);
int ivsize = crypto_skcipher_ivsize(skcipher);
int ecode = 0;
bool has_bklog;
dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
edesc = rctx->edesc;
has_bklog = edesc->bklog;
if (err)
ecode = caam_jr_strstatus(jrdev, err);
@ -1028,7 +1032,7 @@ static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
* If no backlog flag, the completion of the request is done
* by CAAM, not crypto engine.
*/
if (!edesc->bklog)
if (!has_bklog)
skcipher_request_complete(req, ecode);
else
crypto_finalize_skcipher_request(jrp->engine, req, ecode);
@ -1711,7 +1715,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
if (ivsize || mapped_dst_nents > 1)
sg_to_sec4_set_last(edesc->sec4_sg + dst_sg_idx +
mapped_dst_nents);
mapped_dst_nents - 1 + !!ivsize);
if (sec4_sg_bytes) {
edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,

View File

@ -583,10 +583,12 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,
struct caam_hash_state *state = ahash_request_ctx(req);
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
int ecode = 0;
bool has_bklog;
dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
edesc = state->edesc;
has_bklog = edesc->bklog;
if (err)
ecode = caam_jr_strstatus(jrdev, err);
@ -603,7 +605,7 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,
* If no backlog flag, the completion of the request is done
* by CAAM, not crypto engine.
*/
if (!edesc->bklog)
if (!has_bklog)
req->base.complete(&req->base, ecode);
else
crypto_finalize_hash_request(jrp->engine, req, ecode);
@ -632,10 +634,12 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err,
struct caam_hash_state *state = ahash_request_ctx(req);
int digestsize = crypto_ahash_digestsize(ahash);
int ecode = 0;
bool has_bklog;
dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
edesc = state->edesc;
has_bklog = edesc->bklog;
if (err)
ecode = caam_jr_strstatus(jrdev, err);
@ -663,7 +667,7 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err,
* If no backlog flag, the completion of the request is done
* by CAAM, not crypto engine.
*/
if (!edesc->bklog)
if (!has_bklog)
req->base.complete(&req->base, ecode);
else
crypto_finalize_hash_request(jrp->engine, req, ecode);

View File

@ -121,11 +121,13 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
struct rsa_edesc *edesc;
int ecode = 0;
bool has_bklog;
if (err)
ecode = caam_jr_strstatus(dev, err);
edesc = req_ctx->edesc;
has_bklog = edesc->bklog;
rsa_pub_unmap(dev, edesc, req);
rsa_io_unmap(dev, edesc, req);
@ -135,7 +137,7 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
* If no backlog flag, the completion of the request is done
* by CAAM, not crypto engine.
*/
if (!edesc->bklog)
if (!has_bklog)
akcipher_request_complete(req, ecode);
else
crypto_finalize_akcipher_request(jrp->engine, req, ecode);
@ -152,11 +154,13 @@ static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
struct rsa_edesc *edesc;
int ecode = 0;
bool has_bklog;
if (err)
ecode = caam_jr_strstatus(dev, err);
edesc = req_ctx->edesc;
has_bklog = edesc->bklog;
switch (key->priv_form) {
case FORM1:
@ -176,7 +180,7 @@ static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
* If no backlog flag, the completion of the request is done
* by CAAM, not crypto engine.
*/
if (!edesc->bklog)
if (!has_bklog)
akcipher_request_complete(req, ecode);
else
crypto_finalize_akcipher_request(jrp->engine, req, ecode);
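
Illustrative sketch (not part of this commit): the three caam hunks above all latch edesc->bklog into a local variable before completing the request, because the completion path may free the descriptor and the old code read the flag afterwards. A small userspace analogue of that save-before-handoff pattern, with all names hypothetical:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct edesc {
	bool bklog;	/* true when the request went through a backlog queue */
	/* ...DMA mappings, hardware descriptor... */
};

/* stand-in for the unmap + completion path, which frees the descriptor */
static void complete_and_free(struct edesc *edesc, int ecode)
{
	printf("request completed, status %d\n", ecode);
	free(edesc);
}

static void crypt_done(struct edesc *edesc, int ecode)
{
	/*
	 * Latch the flag before handing the descriptor off: the completion
	 * below frees it, so reading edesc->bklog afterwards would be a
	 * use-after-free -- the bug the hunks above fix.
	 */
	bool has_bklog = edesc->bklog;

	complete_and_free(edesc, ecode);

	if (!has_bklog)
		printf("finished on the direct path\n");
	else
		printf("finished via the backlog path\n");
}

int main(void)
{
	struct edesc *e = malloc(sizeof(*e));

	if (!e)
		return 1;
	e->bklog = true;
	crypt_done(e, 0);
	return 0;
}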

View File

@ -673,41 +673,14 @@ int chcr_ktls_cpl_set_tcb_rpl(struct adapter *adap, unsigned char *input)
return 0;
}
/*
* chcr_write_cpl_set_tcb_ulp: update tcb values.
* TCB is responsible to create tcp headers, so all the related values
* should be correctly updated.
* @tx_info - driver specific tls info.
* @q - tx queue on which packet is going out.
* @tid - TCB identifier.
* @pos - current index where should we start writing.
* @word - TCB word.
* @mask - TCB word related mask.
* @val - TCB word related value.
* @reply - set 1 if looking for TP response.
* return - next position to write.
*/
static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,
struct sge_eth_txq *q, u32 tid,
void *pos, u16 word, u64 mask,
static void *__chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,
u32 tid, void *pos, u16 word, u64 mask,
u64 val, u32 reply)
{
struct cpl_set_tcb_field_core *cpl;
struct ulptx_idata *idata;
struct ulp_txpkt *txpkt;
void *save_pos = NULL;
u8 buf[48] = {0};
int left;
left = (void *)q->q.stat - pos;
if (unlikely(left < CHCR_SET_TCB_FIELD_LEN)) {
if (!left) {
pos = q->q.desc;
} else {
save_pos = pos;
pos = buf;
}
}
/* ULP_TXPKT */
txpkt = pos;
txpkt->cmd_dest = htonl(ULPTX_CMD_V(ULP_TX_PKT) | ULP_TXPKT_DEST_V(0));
@ -732,18 +705,54 @@ static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,
idata = (struct ulptx_idata *)(cpl + 1);
idata->cmd_more = htonl(ULPTX_CMD_V(ULP_TX_SC_NOOP));
idata->len = htonl(0);
pos = idata + 1;
if (save_pos) {
pos = chcr_copy_to_txd(buf, &q->q, save_pos,
CHCR_SET_TCB_FIELD_LEN);
} else {
/* check again if we are at the end of the queue */
if (left == CHCR_SET_TCB_FIELD_LEN)
return pos;
}
/*
* chcr_write_cpl_set_tcb_ulp: update tcb values.
* TCB is responsible to create tcp headers, so all the related values
* should be correctly updated.
* @tx_info - driver specific tls info.
* @q - tx queue on which packet is going out.
* @tid - TCB identifier.
* @pos - current index where should we start writing.
* @word - TCB word.
* @mask - TCB word related mask.
* @val - TCB word related value.
* @reply - set 1 if looking for TP response.
* return - next position to write.
*/
static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,
struct sge_eth_txq *q, u32 tid,
void *pos, u16 word, u64 mask,
u64 val, u32 reply)
{
int left = (void *)q->q.stat - pos;
if (unlikely(left < CHCR_SET_TCB_FIELD_LEN)) {
if (!left) {
pos = q->q.desc;
else
pos = idata + 1;
} else {
u8 buf[48] = {0};
__chcr_write_cpl_set_tcb_ulp(tx_info, tid, buf, word,
mask, val, reply);
return chcr_copy_to_txd(buf, &q->q, pos,
CHCR_SET_TCB_FIELD_LEN);
}
}
pos = __chcr_write_cpl_set_tcb_ulp(tx_info, tid, pos, word,
mask, val, reply);
/* check again if we are at the end of the queue */
if (left == CHCR_SET_TCB_FIELD_LEN)
pos = q->q.desc;
return pos;
}

View File

@ -388,7 +388,8 @@ static long dma_buf_ioctl(struct file *file,
return ret;
case DMA_BUF_SET_NAME:
case DMA_BUF_SET_NAME_A:
case DMA_BUF_SET_NAME_B:
return dma_buf_set_name(dmabuf, (const char __user *)arg);
default:
@ -655,8 +656,8 @@ EXPORT_SYMBOL_GPL(dma_buf_put);
* calls attach() of dma_buf_ops to allow device-specific attach functionality
* @dmabuf: [in] buffer to attach device to.
* @dev: [in] device to be attached.
* @importer_ops [in] importer operations for the attachment
* @importer_priv [in] importer private pointer for the attachment
* @importer_ops: [in] importer operations for the attachment
* @importer_priv: [in] importer private pointer for the attachment
*
* Returns struct dma_buf_attachment pointer for this attachment. Attachments
* must be cleaned up by calling dma_buf_detach().

View File

@ -241,7 +241,8 @@ config FSL_RAID
config HISI_DMA
tristate "HiSilicon DMA Engine support"
depends on ARM64 || (COMPILE_TEST && PCI_MSI)
depends on ARM64 || COMPILE_TEST
depends on PCI_MSI
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help

View File

@ -232,10 +232,6 @@ static void chan_dev_release(struct device *dev)
struct dma_chan_dev *chan_dev;
chan_dev = container_of(dev, typeof(*chan_dev), device);
if (atomic_dec_and_test(chan_dev->idr_ref)) {
ida_free(&dma_ida, chan_dev->dev_id);
kfree(chan_dev->idr_ref);
}
kfree(chan_dev);
}
@ -1043,27 +1039,9 @@ static int get_dma_id(struct dma_device *device)
}
static int __dma_async_device_channel_register(struct dma_device *device,
struct dma_chan *chan,
int chan_id)
struct dma_chan *chan)
{
int rc = 0;
int chancnt = device->chancnt;
atomic_t *idr_ref;
struct dma_chan *tchan;
tchan = list_first_entry_or_null(&device->channels,
struct dma_chan, device_node);
if (!tchan)
return -ENODEV;
if (tchan->dev) {
idr_ref = tchan->dev->idr_ref;
} else {
idr_ref = kmalloc(sizeof(*idr_ref), GFP_KERNEL);
if (!idr_ref)
return -ENOMEM;
atomic_set(idr_ref, 0);
}
chan->local = alloc_percpu(typeof(*chan->local));
if (!chan->local)
@ -1079,29 +1057,36 @@ static int __dma_async_device_channel_register(struct dma_device *device,
* When the chan_id is a negative value, we are dynamically adding
* the channel. Otherwise we are static enumerating.
*/
chan->chan_id = chan_id < 0 ? chancnt : chan_id;
mutex_lock(&device->chan_mutex);
chan->chan_id = ida_alloc(&device->chan_ida, GFP_KERNEL);
mutex_unlock(&device->chan_mutex);
if (chan->chan_id < 0) {
pr_err("%s: unable to alloc ida for chan: %d\n",
__func__, chan->chan_id);
goto err_out;
}
chan->dev->device.class = &dma_devclass;
chan->dev->device.parent = device->dev;
chan->dev->chan = chan;
chan->dev->idr_ref = idr_ref;
chan->dev->dev_id = device->dev_id;
atomic_inc(idr_ref);
dev_set_name(&chan->dev->device, "dma%dchan%d",
device->dev_id, chan->chan_id);
rc = device_register(&chan->dev->device);
if (rc)
goto err_out;
goto err_out_ida;
chan->client_count = 0;
device->chancnt = chan->chan_id + 1;
device->chancnt++;
return 0;
err_out_ida:
mutex_lock(&device->chan_mutex);
ida_free(&device->chan_ida, chan->chan_id);
mutex_unlock(&device->chan_mutex);
err_out:
free_percpu(chan->local);
kfree(chan->dev);
if (atomic_dec_return(idr_ref) == 0)
kfree(idr_ref);
return rc;
}
@ -1110,7 +1095,7 @@ int dma_async_device_channel_register(struct dma_device *device,
{
int rc;
rc = __dma_async_device_channel_register(device, chan, -1);
rc = __dma_async_device_channel_register(device, chan);
if (rc < 0)
return rc;
@ -1130,6 +1115,9 @@ static void __dma_async_device_channel_unregister(struct dma_device *device,
device->chancnt--;
chan->dev->chan = NULL;
mutex_unlock(&dma_list_mutex);
mutex_lock(&device->chan_mutex);
ida_free(&device->chan_ida, chan->chan_id);
mutex_unlock(&device->chan_mutex);
device_unregister(&chan->dev->device);
free_percpu(chan->local);
}
@ -1152,7 +1140,7 @@ EXPORT_SYMBOL_GPL(dma_async_device_channel_unregister);
*/
int dma_async_device_register(struct dma_device *device)
{
int rc, i = 0;
int rc;
struct dma_chan* chan;
if (!device)
@ -1257,9 +1245,12 @@ int dma_async_device_register(struct dma_device *device)
if (rc != 0)
return rc;
mutex_init(&device->chan_mutex);
ida_init(&device->chan_ida);
/* represent channels in sysfs. Probably want devs too */
list_for_each_entry(chan, &device->channels, device_node) {
rc = __dma_async_device_channel_register(device, chan, i++);
rc = __dma_async_device_channel_register(device, chan);
if (rc < 0)
goto err_out;
}
@ -1334,6 +1325,7 @@ void dma_async_device_unregister(struct dma_device *device)
*/
dma_cap_set(DMA_PRIVATE, device->cap_mask);
dma_channel_rebalance();
ida_free(&dma_ida, device->dev_id);
dma_device_put(device);
mutex_unlock(&dma_list_mutex);
}
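
Illustrative sketch (not part of this commit): the dmaengine hunks above drop the shared idr_ref bookkeeping and hand out channel IDs from a per-device IDA under chan_mutex, so IDs are recycled as channels are unregistered. A tiny userspace analogue of such a mutex-guarded ID allocator, with all names hypothetical:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_CHAN_IDS 64

struct chan_ida {
	pthread_mutex_t lock;		/* plays the role of device->chan_mutex */
	uint64_t used;			/* bit n set => channel id n is in use */
};

static int chan_id_alloc(struct chan_ida *ida)
{
	int id = -1;

	pthread_mutex_lock(&ida->lock);
	for (int i = 0; i < MAX_CHAN_IDS; i++) {
		if (!(ida->used & ((uint64_t)1 << i))) {
			ida->used |= (uint64_t)1 << i;	/* lowest free id */
			id = i;
			break;
		}
	}
	pthread_mutex_unlock(&ida->lock);
	return id;				/* -1 when all ids are taken */
}

static void chan_id_free(struct chan_ida *ida, int id)
{
	pthread_mutex_lock(&ida->lock);
	ida->used &= ~((uint64_t)1 << id);	/* freed ids are reused right away */
	pthread_mutex_unlock(&ida->lock);
}

int main(void)
{
	struct chan_ida ida = { PTHREAD_MUTEX_INITIALIZER, 0 };
	int a = chan_id_alloc(&ida);	/* 0 */
	int b = chan_id_alloc(&ida);	/* 1 */

	chan_id_free(&ida, a);
	printf("ids: %d %d, next after freeing %d: %d\n",
	       a, b, a, chan_id_alloc(&ida));	/* 0 is handed out again */
	return 0;
}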

View File

@ -240,7 +240,7 @@ static bool is_threaded_test_run(struct dmatest_info *info)
struct dmatest_thread *thread;
list_for_each_entry(thread, &dtc->threads, node) {
if (!thread->done)
if (!thread->done && !thread->pending)
return true;
}
}
@ -662,8 +662,8 @@ static int dmatest_func(void *data)
flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
ktime = ktime_get();
while (!kthread_should_stop()
&& !(params->iterations && total_tests >= params->iterations)) {
while (!(kthread_should_stop() ||
(params->iterations && total_tests >= params->iterations))) {
struct dma_async_tx_descriptor *tx = NULL;
struct dmaengine_unmap_data *um;
dma_addr_t *dsts;

View File

@ -363,6 +363,8 @@ static void mmp_tdma_free_descriptor(struct mmp_tdma_chan *tdmac)
gen_pool_free(gpool, (unsigned long)tdmac->desc_arr,
size);
tdmac->desc_arr = NULL;
if (tdmac->status == DMA_ERROR)
tdmac->status = DMA_COMPLETE;
return;
}
@ -443,7 +445,8 @@ static struct dma_async_tx_descriptor *mmp_tdma_prep_dma_cyclic(
if (!desc)
goto err_out;
mmp_tdma_config_write(chan, direction, &tdmac->slave_config);
if (mmp_tdma_config_write(chan, direction, &tdmac->slave_config))
goto err_out;
while (buf < buf_len) {
desc = &tdmac->desc_arr[i];

View File

@ -865,6 +865,7 @@ static int pch_dma_probe(struct pci_dev *pdev,
}
pci_set_master(pdev);
pd->dma.dev = &pdev->dev;
err = request_irq(pdev->irq, pd_irq, IRQF_SHARED, DRV_NAME, pd);
if (err) {
@ -880,7 +881,6 @@ static int pch_dma_probe(struct pci_dev *pdev,
goto err_free_irq;
}
pd->dma.dev = &pdev->dev;
INIT_LIST_HEAD(&pd->dma.channels);

View File

@ -816,6 +816,13 @@ static bool tegra_dma_eoc_interrupt_deasserted(struct tegra_dma_channel *tdc)
static void tegra_dma_synchronize(struct dma_chan *dc)
{
struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);
int err;
err = pm_runtime_get_sync(tdc->tdma->dev);
if (err < 0) {
dev_err(tdc2dev(tdc), "Failed to synchronize DMA: %d\n", err);
return;
}
/*
* CPU, which handles interrupt, could be busy in
@ -825,6 +832,8 @@ static void tegra_dma_synchronize(struct dma_chan *dc)
wait_event(tdc->wq, tegra_dma_eoc_interrupt_deasserted(tdc));
tasklet_kill(&tdc->tasklet);
pm_runtime_put(tdc->tdma->dev);
}
static unsigned int tegra_dma_sg_bytes_xferred(struct tegra_dma_channel *tdc,

View File

@ -27,6 +27,7 @@ struct psil_endpoint_config *psil_get_ep_config(u32 thread_id)
soc_ep_map = &j721e_ep_map;
} else {
pr_err("PSIL: No compatible machine found for map\n");
mutex_unlock(&ep_map_mutex);
return ERR_PTR(-ENOTSUPP);
}
pr_debug("%s: Using map for %s\n", __func__, soc_ep_map->name);

View File

@ -1230,16 +1230,16 @@ static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
return ret;
spin_lock_irqsave(&chan->lock, flags);
desc = list_last_entry(&chan->active_list,
struct xilinx_dma_tx_descriptor, node);
/*
* VDMA and simple mode do not support residue reporting, so the
* residue field will always be 0.
*/
if (chan->has_sg && chan->xdev->dma_config->dmatype != XDMA_TYPE_VDMA)
residue = xilinx_dma_get_residue(chan, desc);
if (!list_empty(&chan->active_list)) {
desc = list_last_entry(&chan->active_list,
struct xilinx_dma_tx_descriptor, node);
/*
* VDMA and simple mode do not support residue reporting, so the
* residue field will always be 0.
*/
if (chan->has_sg && chan->xdev->dma_config->dmatype != XDMA_TYPE_VDMA)
residue = xilinx_dma_get_residue(chan, desc);
}
spin_unlock_irqrestore(&chan->lock, flags);
dma_set_residue(txstate, residue);

View File

@ -16,7 +16,7 @@
int efi_tpm_final_log_size;
EXPORT_SYMBOL(efi_tpm_final_log_size);
static int tpm2_calc_event_log_size(void *data, int count, void *size_info)
static int __init tpm2_calc_event_log_size(void *data, int count, void *size_info)
{
struct tcg_pcr_event2_head *header;
int event_size, size = 0;

View File

@ -3372,15 +3372,12 @@ int amdgpu_device_suspend(struct drm_device *dev, bool fbcon)
}
}
amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
amdgpu_amdkfd_suspend(adev, !fbcon);
amdgpu_ras_suspend(adev);
r = amdgpu_device_ip_suspend_phase1(adev);
amdgpu_amdkfd_suspend(adev, !fbcon);
/* evict vram memory */
amdgpu_bo_evict_vram(adev);

View File

@ -85,9 +85,10 @@
* - 3.34.0 - Non-DC can flip correctly between buffers with different pitches
* - 3.35.0 - Add drm_amdgpu_info_device::tcc_disabled_mask
* - 3.36.0 - Allow reading more status registers on si/cik
* - 3.37.0 - L2 is invalidated before SDMA IBs, needed for correctness
*/
#define KMS_DRIVER_MAJOR 3
#define KMS_DRIVER_MINOR 36
#define KMS_DRIVER_MINOR 37
#define KMS_DRIVER_PATCHLEVEL 0
int amdgpu_vram_limit = 0;

View File

@ -73,6 +73,22 @@
#define SDMA_OP_AQL_COPY 0
#define SDMA_OP_AQL_BARRIER_OR 0
#define SDMA_GCR_RANGE_IS_PA (1 << 18)
#define SDMA_GCR_SEQ(x) (((x) & 0x3) << 16)
#define SDMA_GCR_GL2_WB (1 << 15)
#define SDMA_GCR_GL2_INV (1 << 14)
#define SDMA_GCR_GL2_DISCARD (1 << 13)
#define SDMA_GCR_GL2_RANGE(x) (((x) & 0x3) << 11)
#define SDMA_GCR_GL2_US (1 << 10)
#define SDMA_GCR_GL1_INV (1 << 9)
#define SDMA_GCR_GLV_INV (1 << 8)
#define SDMA_GCR_GLK_INV (1 << 7)
#define SDMA_GCR_GLK_WB (1 << 6)
#define SDMA_GCR_GLM_INV (1 << 5)
#define SDMA_GCR_GLM_WB (1 << 4)
#define SDMA_GCR_GL1_RANGE(x) (((x) & 0x3) << 2)
#define SDMA_GCR_GLI_INV(x) (((x) & 0x3) << 0)
/*define for op field*/
#define SDMA_PKT_HEADER_op_offset 0
#define SDMA_PKT_HEADER_op_mask 0x000000FF

View File

@ -382,6 +382,18 @@ static void sdma_v5_0_ring_emit_ib(struct amdgpu_ring *ring,
unsigned vmid = AMDGPU_JOB_GET_VMID(job);
uint64_t csa_mc_addr = amdgpu_sdma_get_csa_mc_addr(ring, vmid);
/* Invalidate L2, because if we don't do it, we might get stale cache
* lines from previous IBs.
*/
amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_GCR_REQ));
amdgpu_ring_write(ring, 0);
amdgpu_ring_write(ring, (SDMA_GCR_GL2_INV |
SDMA_GCR_GL2_WB |
SDMA_GCR_GLM_INV |
SDMA_GCR_GLM_WB) << 16);
amdgpu_ring_write(ring, 0xffffff80);
amdgpu_ring_write(ring, 0xffff);
/* An IB packet must end on a 8 DW boundary--the next dword
* must be on a 8-dword boundary. Our IB packet below is 6
* dwords long, thus add x number of NOPs, such that, in
@ -1595,7 +1607,7 @@ static const struct amdgpu_ring_funcs sdma_v5_0_ring_funcs = {
SOC15_FLUSH_GPU_TLB_NUM_WREG * 3 +
SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 6 * 2 +
10 + 10 + 10, /* sdma_v5_0_ring_emit_fence x3 for user fence, vm fence */
.emit_ib_size = 7 + 6, /* sdma_v5_0_ring_emit_ib */
.emit_ib_size = 5 + 7 + 6, /* sdma_v5_0_ring_emit_ib */
.emit_ib = sdma_v5_0_ring_emit_ib,
.emit_fence = sdma_v5_0_ring_emit_fence,
.emit_pipeline_sync = sdma_v5_0_ring_emit_pipeline_sync,

View File

@ -2008,17 +2008,22 @@ void amdgpu_dm_update_connector_after_detect(
dc_sink_retain(aconnector->dc_sink);
if (sink->dc_edid.length == 0) {
aconnector->edid = NULL;
drm_dp_cec_unset_edid(&aconnector->dm_dp_aux.aux);
if (aconnector->dc_link->aux_mode) {
drm_dp_cec_unset_edid(
&aconnector->dm_dp_aux.aux);
}
} else {
aconnector->edid =
(struct edid *) sink->dc_edid.raw_edid;
(struct edid *)sink->dc_edid.raw_edid;
drm_connector_update_edid_property(connector,
aconnector->edid);
drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
aconnector->edid);
aconnector->edid);
if (aconnector->dc_link->aux_mode)
drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
aconnector->edid);
}
amdgpu_dm_update_freesync_caps(connector, aconnector->edid);
update_connector_ext_caps(aconnector);
} else {
@ -3340,7 +3345,8 @@ fill_plane_dcc_attributes(struct amdgpu_device *adev,
const union dc_tiling_info *tiling_info,
const uint64_t info,
struct dc_plane_dcc_param *dcc,
struct dc_plane_address *address)
struct dc_plane_address *address,
bool force_disable_dcc)
{
struct dc *dc = adev->dm.dc;
struct dc_dcc_surface_param input;
@ -3352,6 +3358,9 @@ fill_plane_dcc_attributes(struct amdgpu_device *adev,
memset(&input, 0, sizeof(input));
memset(&output, 0, sizeof(output));
if (force_disable_dcc)
return 0;
if (!offset)
return 0;
@ -3401,7 +3410,8 @@ fill_plane_buffer_attributes(struct amdgpu_device *adev,
union dc_tiling_info *tiling_info,
struct plane_size *plane_size,
struct dc_plane_dcc_param *dcc,
struct dc_plane_address *address)
struct dc_plane_address *address,
bool force_disable_dcc)
{
const struct drm_framebuffer *fb = &afb->base;
int ret;
@ -3507,7 +3517,8 @@ fill_plane_buffer_attributes(struct amdgpu_device *adev,
ret = fill_plane_dcc_attributes(adev, afb, format, rotation,
plane_size, tiling_info,
tiling_flags, dcc, address);
tiling_flags, dcc, address,
force_disable_dcc);
if (ret)
return ret;
}
@ -3599,7 +3610,8 @@ fill_dc_plane_info_and_addr(struct amdgpu_device *adev,
const struct drm_plane_state *plane_state,
const uint64_t tiling_flags,
struct dc_plane_info *plane_info,
struct dc_plane_address *address)
struct dc_plane_address *address,
bool force_disable_dcc)
{
const struct drm_framebuffer *fb = plane_state->fb;
const struct amdgpu_framebuffer *afb =
@ -3681,7 +3693,8 @@ fill_dc_plane_info_and_addr(struct amdgpu_device *adev,
plane_info->rotation, tiling_flags,
&plane_info->tiling_info,
&plane_info->plane_size,
&plane_info->dcc, address);
&plane_info->dcc, address,
force_disable_dcc);
if (ret)
return ret;
@ -3704,6 +3717,7 @@ static int fill_dc_plane_attributes(struct amdgpu_device *adev,
struct dc_plane_info plane_info;
uint64_t tiling_flags;
int ret;
bool force_disable_dcc = false;
ret = fill_dc_scaling_info(plane_state, &scaling_info);
if (ret)
@ -3718,9 +3732,11 @@ static int fill_dc_plane_attributes(struct amdgpu_device *adev,
if (ret)
return ret;
force_disable_dcc = adev->asic_type == CHIP_RAVEN && adev->in_suspend;
ret = fill_dc_plane_info_and_addr(adev, plane_state, tiling_flags,
&plane_info,
&dc_plane_state->address);
&dc_plane_state->address,
force_disable_dcc);
if (ret)
return ret;
@ -5342,6 +5358,7 @@ static int dm_plane_helper_prepare_fb(struct drm_plane *plane,
uint64_t tiling_flags;
uint32_t domain;
int r;
bool force_disable_dcc = false;
dm_plane_state_old = to_dm_plane_state(plane->state);
dm_plane_state_new = to_dm_plane_state(new_state);
@ -5400,11 +5417,13 @@ static int dm_plane_helper_prepare_fb(struct drm_plane *plane,
dm_plane_state_old->dc_state != dm_plane_state_new->dc_state) {
struct dc_plane_state *plane_state = dm_plane_state_new->dc_state;
force_disable_dcc = adev->asic_type == CHIP_RAVEN && adev->in_suspend;
fill_plane_buffer_attributes(
adev, afb, plane_state->format, plane_state->rotation,
tiling_flags, &plane_state->tiling_info,
&plane_state->plane_size, &plane_state->dcc,
&plane_state->address);
&plane_state->address,
force_disable_dcc);
}
return 0;
@ -6676,7 +6695,12 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
fill_dc_plane_info_and_addr(
dm->adev, new_plane_state, tiling_flags,
&bundle->plane_infos[planes_count],
&bundle->flip_addrs[planes_count].address);
&bundle->flip_addrs[planes_count].address,
false);
DRM_DEBUG_DRIVER("plane: id=%d dcc_en=%d\n",
new_plane_state->plane->index,
bundle->plane_infos[planes_count].dcc.enable);
bundle->surface_updates[planes_count].plane_info =
&bundle->plane_infos[planes_count];
@ -8096,7 +8120,8 @@ dm_determine_update_type_for_commit(struct amdgpu_display_manager *dm,
ret = fill_dc_plane_info_and_addr(
dm->adev, new_plane_state, tiling_flags,
plane_info,
&flip_addr->address);
&flip_addr->address,
false);
if (ret)
goto cleanup;

View File

@ -834,11 +834,10 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
static void wait_for_no_pipes_pending(struct dc *dc, struct dc_state *context)
{
int i;
int count = 0;
struct pipe_ctx *pipe;
PERF_TRACE();
for (i = 0; i < MAX_PIPES; i++) {
pipe = &context->res_ctx.pipe_ctx[i];
int count = 0;
struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
if (!pipe->plane_state)
continue;

View File

@ -2908,6 +2908,12 @@ bool dc_link_handle_hpd_rx_irq(struct dc_link *link, union hpd_irq_data *out_hpd
sizeof(hpd_irq_dpcd_data),
"Status: ");
for (i = 0; i < MAX_PIPES; i++) {
pipe_ctx = &link->dc->current_state->res_ctx.pipe_ctx[i];
if (pipe_ctx && pipe_ctx->stream && pipe_ctx->stream->link == link)
link->dc->hwss.blank_stream(pipe_ctx);
}
for (i = 0; i < MAX_PIPES; i++) {
pipe_ctx = &link->dc->current_state->res_ctx.pipe_ctx[i];
if (pipe_ctx && pipe_ctx->stream && pipe_ctx->stream->link == link)
@ -2927,6 +2933,12 @@ bool dc_link_handle_hpd_rx_irq(struct dc_link *link, union hpd_irq_data *out_hpd
if (pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
dc_link_reallocate_mst_payload(link);
for (i = 0; i < MAX_PIPES; i++) {
pipe_ctx = &link->dc->current_state->res_ctx.pipe_ctx[i];
if (pipe_ctx && pipe_ctx->stream && pipe_ctx->stream->link == link)
link->dc->hwss.unblank_stream(pipe_ctx, &previous_link_settings);
}
status = false;
if (out_link_loss)
*out_link_loss = true;
@ -4227,6 +4239,21 @@ void dp_set_fec_enable(struct dc_link *link, bool enable)
void dpcd_set_source_specific_data(struct dc_link *link)
{
const uint32_t post_oui_delay = 30; // 30ms
uint8_t dspc = 0;
enum dc_status ret = DC_ERROR_UNEXPECTED;
ret = core_link_read_dpcd(link, DP_DOWN_STREAM_PORT_COUNT, &dspc,
sizeof(dspc));
if (ret != DC_OK) {
DC_LOG_ERROR("Error in DP aux read transaction,"
" not writing source specific data\n");
return;
}
/* Return if OUI unsupported */
if (!(dspc & DP_OUI_SUPPORT))
return;
if (!link->dc->vendor_signature.is_valid) {
struct dpcd_amd_signature amd_signature;

View File

@ -231,34 +231,6 @@ struct dc_stream_status *dc_stream_get_status(
return dc_stream_get_status_from_state(dc->current_state, stream);
}
static void delay_cursor_until_vupdate(struct pipe_ctx *pipe_ctx, struct dc *dc)
{
#if defined(CONFIG_DRM_AMD_DC_DCN)
unsigned int vupdate_line;
unsigned int lines_to_vupdate, us_to_vupdate, vpos, nvpos;
struct dc_stream_state *stream = pipe_ctx->stream;
unsigned int us_per_line;
if (stream->ctx->asic_id.chip_family == FAMILY_RV &&
ASICREV_IS_RAVEN(stream->ctx->asic_id.hw_internal_rev)) {
vupdate_line = dc->hwss.get_vupdate_offset_from_vsync(pipe_ctx);
if (!dc_stream_get_crtc_position(dc, &stream, 1, &vpos, &nvpos))
return;
if (vpos >= vupdate_line)
return;
us_per_line = stream->timing.h_total * 10000 / stream->timing.pix_clk_100hz;
lines_to_vupdate = vupdate_line - vpos;
us_to_vupdate = lines_to_vupdate * us_per_line;
/* 70 us is a conservative estimate of cursor update time*/
if (us_to_vupdate < 70)
udelay(us_to_vupdate);
}
#endif
}
/**
* dc_stream_set_cursor_attributes() - Update cursor attributes and set cursor surface address
@ -298,9 +270,7 @@ bool dc_stream_set_cursor_attributes(
if (!pipe_to_program) {
pipe_to_program = pipe_ctx;
delay_cursor_until_vupdate(pipe_ctx, dc);
dc->hwss.pipe_control_lock(dc, pipe_to_program, true);
dc->hwss.cursor_lock(dc, pipe_to_program, true);
}
dc->hwss.set_cursor_attribute(pipe_ctx);
@ -309,7 +279,7 @@ bool dc_stream_set_cursor_attributes(
}
if (pipe_to_program)
dc->hwss.pipe_control_lock(dc, pipe_to_program, false);
dc->hwss.cursor_lock(dc, pipe_to_program, false);
return true;
}
@ -349,16 +319,14 @@ bool dc_stream_set_cursor_position(
if (!pipe_to_program) {
pipe_to_program = pipe_ctx;
delay_cursor_until_vupdate(pipe_ctx, dc);
dc->hwss.pipe_control_lock(dc, pipe_to_program, true);
dc->hwss.cursor_lock(dc, pipe_to_program, true);
}
dc->hwss.set_cursor_position(pipe_ctx);
}
if (pipe_to_program)
dc->hwss.pipe_control_lock(dc, pipe_to_program, false);
dc->hwss.cursor_lock(dc, pipe_to_program, false);
return true;
}

Some files were not shown because too many files have changed in this diff.