Merge branch 'mlx5-rd-sgl' into rdma.git for-next

From Yamin Friedman:

====================
This series from Yamin implements a long-standing TODO in rw.c. It allows
the driver to specify a cut-over point beyond which it is faster to build
an lkey MR than to issue one large SGL for RDMA READ operations.

mlx5 HW gets a notable performance boost by switching to MRs.
====================

Based on the mlx5-next branch from
git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux for
dependencies

* branch 'mlx5-rd-sgl': (3 commits)
  RDMA/mlx5: Add capability for max sge to get optimized performance
  RDMA/rw: Support threshold for registration vs scattering to local pages
  net/mlx5: Expose optimal performance scatter entries capability
Jason Gunthorpe, 2019-10-22 14:27:25 -03:00
commit a2aca4d7f0
297 changed files with 2654 additions and 1818 deletions


@ -85,4 +85,5 @@ examples:
<&pd IMX_SC_R_DSP_RAM>; <&pd IMX_SC_R_DSP_RAM>;
mbox-names = "txdb0", "txdb1", "rxdb0", "rxdb1"; mbox-names = "txdb0", "txdb1", "rxdb0", "rxdb1";
mboxes = <&lsio_mu13 2 0>, <&lsio_mu13 2 1>, <&lsio_mu13 3 0>, <&lsio_mu13 3 1>; mboxes = <&lsio_mu13 2 0>, <&lsio_mu13 2 1>, <&lsio_mu13 3 0>, <&lsio_mu13 3 1>;
memory-region = <&dsp_reserved>;
}; };


@ -43,13 +43,9 @@ properties:
dvdd-supply: dvdd-supply:
description: DVdd voltage supply description: DVdd voltage supply
items:
- const: dvdd
avdd-supply: avdd-supply:
description: AVdd voltage supply description: AVdd voltage supply
items:
- const: avdd
adi,rejection-60-Hz-enable: adi,rejection-60-Hz-enable:
description: | description: |
@ -99,6 +95,9 @@ required:
examples: examples:
- | - |
spi0 { spi0 {
#address-cells = <1>;
#size-cells = <0>;
adc@0 { adc@0 {
compatible = "adi,ad7192"; compatible = "adi,ad7192";
reg = <0>; reg = <0>;


@ -73,7 +73,6 @@ properties:
- rc-genius-tvgo-a11mce - rc-genius-tvgo-a11mce
- rc-gotview7135 - rc-gotview7135
- rc-hauppauge - rc-hauppauge
- rc-hauppauge
- rc-hisi-poplar - rc-hisi-poplar
- rc-hisi-tv-demo - rc-hisi-tv-demo
- rc-imon-mce - rc-imon-mce


@ -37,7 +37,7 @@ properties:
- description: exclusive PHY reset line - description: exclusive PHY reset line
- description: shared reset line between the PCIe PHY and PCIe controller - description: shared reset line between the PCIe PHY and PCIe controller
resets-names: reset-names:
items: items:
- const: phy - const: phy
- const: pcie - const: pcie


@ -954,11 +954,6 @@ When kbuild executes, the following steps are followed (roughly):
From commandline LDFLAGS_MODULE shall be used (see kbuild.txt). From commandline LDFLAGS_MODULE shall be used (see kbuild.txt).
KBUILD_ARFLAGS Options for $(AR) when creating archives
$(KBUILD_ARFLAGS) set by the top level Makefile to "D" (deterministic
mode) if this option is supported by $(AR).
KBUILD_LDS KBUILD_LDS
The linker script with full path. Assigned by the top-level Makefile. The linker script with full path. Assigned by the top-level Makefile.


@ -498,10 +498,11 @@ build.
will be written containing all exported symbols that were not will be written containing all exported symbols that were not
defined in the kernel. defined in the kernel.
--- 6.3 Symbols From Another External Module 6.3 Symbols From Another External Module
----------------------------------------
Sometimes, an external module uses exported symbols from Sometimes, an external module uses exported symbols from
another external module. kbuild needs to have full knowledge of another external module. Kbuild needs to have full knowledge of
all symbols to avoid spitting out warnings about undefined all symbols to avoid spitting out warnings about undefined
symbols. Three solutions exist for this situation. symbols. Three solutions exist for this situation.
@ -521,7 +522,7 @@ build.
The top-level kbuild file would then look like:: The top-level kbuild file would then look like::
#./Kbuild (or ./Makefile): #./Kbuild (or ./Makefile):
obj-y := foo/ bar/ obj-m := foo/ bar/
And executing:: And executing::


@ -16,16 +16,21 @@ the kernel may be unreproducible, and how to avoid them.
Timestamps Timestamps
---------- ----------
The kernel embeds a timestamp in two places: The kernel embeds timestamps in three places:
* The version string exposed by ``uname()`` and included in * The version string exposed by ``uname()`` and included in
``/proc/version`` ``/proc/version``
* File timestamps in the embedded initramfs * File timestamps in the embedded initramfs
By default the timestamp is the current time. This must be overridden * If enabled via ``CONFIG_IKHEADERS``, file timestamps of kernel
using the `KBUILD_BUILD_TIMESTAMP`_ variable. If you are building headers embedded in the kernel or respective module,
from a git commit, you could use its commit date. exposed via ``/sys/kernel/kheaders.tar.xz``
By default the timestamp is the current time and in the case of
``kheaders`` the various files' modification times. This must
be overridden using the `KBUILD_BUILD_TIMESTAMP`_ variable.
If you are building from a git commit, you could use its commit date.
The kernel does *not* use the ``__DATE__`` and ``__TIME__`` macros, The kernel does *not* use the ``__DATE__`` and ``__TIME__`` macros,
and enables warnings if they are used. If you incorporate external and enables warnings if they are used. If you incorporate external


@ -23,6 +23,7 @@ Contents:
intel/ice intel/ice
google/gve google/gve
mellanox/mlx5 mellanox/mlx5
netronome/nfp
pensando/ionic pensando/ionic
.. only:: subproject and html .. only:: subproject and html


@ -272,7 +272,7 @@ supported flags are:
* MSG_DONTWAIT, i.e. non-blocking operation. * MSG_DONTWAIT, i.e. non-blocking operation.
recvmsg(2) recvmsg(2)
^^^^^^^^^ ^^^^^^^^^^
In most cases recvmsg(2) is needed if you want to extract more information than In most cases recvmsg(2) is needed if you want to extract more information than
recvfrom(2) can provide. For example package priority and timestamp. The recvfrom(2) can provide. For example package priority and timestamp. The


@ -6104,7 +6104,10 @@ M: Gao Xiang <gaoxiang25@huawei.com>
M: Chao Yu <yuchao0@huawei.com> M: Chao Yu <yuchao0@huawei.com>
L: linux-erofs@lists.ozlabs.org L: linux-erofs@lists.ozlabs.org
S: Maintained S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs.git
F: Documentation/filesystems/erofs.txt
F: fs/erofs/ F: fs/erofs/
F: include/trace/events/erofs.h
ERRSEQ ERROR TRACKING INFRASTRUCTURE ERRSEQ ERROR TRACKING INFRASTRUCTURE
M: Jeff Layton <jlayton@kernel.org> M: Jeff Layton <jlayton@kernel.org>
@ -9067,6 +9070,7 @@ F: security/keys/
KGDB / KDB /debug_core KGDB / KDB /debug_core
M: Jason Wessel <jason.wessel@windriver.com> M: Jason Wessel <jason.wessel@windriver.com>
M: Daniel Thompson <daniel.thompson@linaro.org> M: Daniel Thompson <daniel.thompson@linaro.org>
R: Douglas Anderson <dianders@chromium.org>
W: http://kgdb.wiki.kernel.org/ W: http://kgdb.wiki.kernel.org/
L: kgdb-bugreport@lists.sourceforge.net L: kgdb-bugreport@lists.sourceforge.net
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/kgdb.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/kgdb.git


@ -2,8 +2,8 @@
VERSION = 5 VERSION = 5
PATCHLEVEL = 4 PATCHLEVEL = 4
SUBLEVEL = 0 SUBLEVEL = 0
EXTRAVERSION = -rc1 EXTRAVERSION = -rc2
NAME = Bobtail Squid NAME = Nesting Opossum
# *DOCUMENTATION* # *DOCUMENTATION*
# To see a list of typical targets execute "make help" # To see a list of typical targets execute "make help"
@ -206,24 +206,8 @@ ifndef KBUILD_CHECKSRC
KBUILD_CHECKSRC = 0 KBUILD_CHECKSRC = 0
endif endif
# Use make M=dir to specify directory of external module to build # Use make M=dir or set the environment variable KBUILD_EXTMOD to specify the
# Old syntax make ... SUBDIRS=$PWD is still supported # directory of external module to build. Setting M= takes precedence.
# Setting the environment variable KBUILD_EXTMOD take precedence
ifdef SUBDIRS
$(warning ================= WARNING ================)
$(warning 'SUBDIRS' will be removed after Linux 5.3)
$(warning )
$(warning If you are building an individual subdirectory)
$(warning in the kernel tree, you can do like this:)
$(warning $$ make path/to/dir/you/want/to/build/)
$(warning (Do not forget the trailing slash))
$(warning )
$(warning If you are building an external module,)
$(warning Please use 'M=' or 'KBUILD_EXTMOD' instead)
$(warning ==========================================)
KBUILD_EXTMOD ?= $(SUBDIRS)
endif
ifeq ("$(origin M)", "command line") ifeq ("$(origin M)", "command line")
KBUILD_EXTMOD := $(M) KBUILD_EXTMOD := $(M)
endif endif
@ -498,7 +482,6 @@ export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE CFLAGS_UBSAN
export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
export KBUILD_ARFLAGS
# Files to ignore in find ... statements # Files to ignore in find ... statements
@ -914,9 +897,6 @@ ifdef CONFIG_RETPOLINE
KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none) KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none)
endif endif
# use the deterministic mode of AR if available
KBUILD_ARFLAGS := $(call ar-option,D)
include scripts/Makefile.kasan include scripts/Makefile.kasan
include scripts/Makefile.extrawarn include scripts/Makefile.extrawarn
include scripts/Makefile.ubsan include scripts/Makefile.ubsan


@ -432,7 +432,7 @@
pinctrl-0 = <&mmc0_pins_default>; pinctrl-0 = <&mmc0_pins_default>;
}; };
&gpio0 { &gpio0_target {
/* Do not idle the GPIO used for holding the VTT regulator */ /* Do not idle the GPIO used for holding the VTT regulator */
ti,no-reset-on-init; ti,no-reset-on-init;
ti,no-idle-on-init; ti,no-idle-on-init;


@ -127,7 +127,7 @@
ranges = <0x0 0x5000 0x1000>; ranges = <0x0 0x5000 0x1000>;
}; };
target-module@7000 { /* 0x44e07000, ap 14 20.0 */ gpio0_target: target-module@7000 { /* 0x44e07000, ap 14 20.0 */
compatible = "ti,sysc-omap2", "ti,sysc"; compatible = "ti,sysc-omap2", "ti,sysc";
ti,hwmods = "gpio1"; ti,hwmods = "gpio1";
reg = <0x7000 0x4>, reg = <0x7000 0x4>,
@ -2038,7 +2038,9 @@
reg = <0xe000 0x4>, reg = <0xe000 0x4>,
<0xe054 0x4>; <0xe054 0x4>;
reg-names = "rev", "sysc"; reg-names = "rev", "sysc";
ti,sysc-midle ; ti,sysc-midle = <SYSC_IDLE_FORCE>,
<SYSC_IDLE_NO>,
<SYSC_IDLE_SMART>;
ti,sysc-sidle = <SYSC_IDLE_FORCE>, ti,sysc-sidle = <SYSC_IDLE_FORCE>,
<SYSC_IDLE_NO>, <SYSC_IDLE_NO>,
<SYSC_IDLE_SMART>; <SYSC_IDLE_SMART>;


@ -337,6 +337,8 @@
ti,hwmods = "dss_dispc"; ti,hwmods = "dss_dispc";
clocks = <&disp_clk>; clocks = <&disp_clk>;
clock-names = "fck"; clock-names = "fck";
max-memory-bandwidth = <230000000>;
}; };
rfbi: rfbi@4832a800 { rfbi: rfbi@4832a800 {


@ -2732,7 +2732,7 @@
interrupt-names = "tx", "rx"; interrupt-names = "tx", "rx";
dmas = <&edma_xbar 129 1>, <&edma_xbar 128 1>; dmas = <&edma_xbar 129 1>, <&edma_xbar 128 1>;
dma-names = "tx", "rx"; dma-names = "tx", "rx";
clocks = <&ipu_clkctrl DRA7_IPU_MCASP1_CLKCTRL 22>, clocks = <&ipu_clkctrl DRA7_IPU_MCASP1_CLKCTRL 0>,
<&ipu_clkctrl DRA7_IPU_MCASP1_CLKCTRL 24>, <&ipu_clkctrl DRA7_IPU_MCASP1_CLKCTRL 24>,
<&ipu_clkctrl DRA7_IPU_MCASP1_CLKCTRL 28>; <&ipu_clkctrl DRA7_IPU_MCASP1_CLKCTRL 28>;
clock-names = "fck", "ahclkx", "ahclkr"; clock-names = "fck", "ahclkx", "ahclkr";
@ -2768,8 +2768,8 @@
interrupt-names = "tx", "rx"; interrupt-names = "tx", "rx";
dmas = <&edma_xbar 131 1>, <&edma_xbar 130 1>; dmas = <&edma_xbar 131 1>, <&edma_xbar 130 1>;
dma-names = "tx", "rx"; dma-names = "tx", "rx";
clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP2_CLKCTRL 22>, clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP2_CLKCTRL 0>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP2_CLKCTRL 24>, <&ipu_clkctrl DRA7_IPU_MCASP1_CLKCTRL 24>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP2_CLKCTRL 28>; <&l4per2_clkctrl DRA7_L4PER2_MCASP2_CLKCTRL 28>;
clock-names = "fck", "ahclkx", "ahclkr"; clock-names = "fck", "ahclkx", "ahclkr";
status = "disabled"; status = "disabled";
@ -2786,9 +2786,8 @@
<SYSC_IDLE_SMART>; <SYSC_IDLE_SMART>;
/* Domains (P, C): l4per_pwrdm, l4per2_clkdm */ /* Domains (P, C): l4per_pwrdm, l4per2_clkdm */
clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 0>, clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 0>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 24>, <&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 24>;
<&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 28>; clock-names = "fck", "ahclkx";
clock-names = "fck", "ahclkx", "ahclkr";
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
ranges = <0x0 0x68000 0x2000>, ranges = <0x0 0x68000 0x2000>,
@ -2804,7 +2803,7 @@
interrupt-names = "tx", "rx"; interrupt-names = "tx", "rx";
dmas = <&edma_xbar 133 1>, <&edma_xbar 132 1>; dmas = <&edma_xbar 133 1>, <&edma_xbar 132 1>;
dma-names = "tx", "rx"; dma-names = "tx", "rx";
clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 22>, clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 0>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 24>; <&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 24>;
clock-names = "fck", "ahclkx"; clock-names = "fck", "ahclkx";
status = "disabled"; status = "disabled";
@ -2821,9 +2820,8 @@
<SYSC_IDLE_SMART>; <SYSC_IDLE_SMART>;
/* Domains (P, C): l4per_pwrdm, l4per2_clkdm */ /* Domains (P, C): l4per_pwrdm, l4per2_clkdm */
clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 0>, clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 0>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 24>, <&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 24>;
<&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 28>; clock-names = "fck", "ahclkx";
clock-names = "fck", "ahclkx", "ahclkr";
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
ranges = <0x0 0x6c000 0x2000>, ranges = <0x0 0x6c000 0x2000>,
@ -2839,7 +2837,7 @@
interrupt-names = "tx", "rx"; interrupt-names = "tx", "rx";
dmas = <&edma_xbar 135 1>, <&edma_xbar 134 1>; dmas = <&edma_xbar 135 1>, <&edma_xbar 134 1>;
dma-names = "tx", "rx"; dma-names = "tx", "rx";
clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 22>, clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 0>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 24>; <&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 24>;
clock-names = "fck", "ahclkx"; clock-names = "fck", "ahclkx";
status = "disabled"; status = "disabled";
@ -2856,9 +2854,8 @@
<SYSC_IDLE_SMART>; <SYSC_IDLE_SMART>;
/* Domains (P, C): l4per_pwrdm, l4per2_clkdm */ /* Domains (P, C): l4per_pwrdm, l4per2_clkdm */
clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 0>, clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 0>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 24>, <&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 24>;
<&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 28>; clock-names = "fck", "ahclkx";
clock-names = "fck", "ahclkx", "ahclkr";
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
ranges = <0x0 0x70000 0x2000>, ranges = <0x0 0x70000 0x2000>,
@ -2874,7 +2871,7 @@
interrupt-names = "tx", "rx"; interrupt-names = "tx", "rx";
dmas = <&edma_xbar 137 1>, <&edma_xbar 136 1>; dmas = <&edma_xbar 137 1>, <&edma_xbar 136 1>;
dma-names = "tx", "rx"; dma-names = "tx", "rx";
clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 22>, clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 0>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 24>; <&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 24>;
clock-names = "fck", "ahclkx"; clock-names = "fck", "ahclkx";
status = "disabled"; status = "disabled";
@ -2891,9 +2888,8 @@
<SYSC_IDLE_SMART>; <SYSC_IDLE_SMART>;
/* Domains (P, C): l4per_pwrdm, l4per2_clkdm */ /* Domains (P, C): l4per_pwrdm, l4per2_clkdm */
clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 0>, clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 0>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 24>, <&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 24>;
<&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 28>; clock-names = "fck", "ahclkx";
clock-names = "fck", "ahclkx", "ahclkr";
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
ranges = <0x0 0x74000 0x2000>, ranges = <0x0 0x74000 0x2000>,
@ -2909,7 +2905,7 @@
interrupt-names = "tx", "rx"; interrupt-names = "tx", "rx";
dmas = <&edma_xbar 139 1>, <&edma_xbar 138 1>; dmas = <&edma_xbar 139 1>, <&edma_xbar 138 1>;
dma-names = "tx", "rx"; dma-names = "tx", "rx";
clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 22>, clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 0>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 24>; <&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 24>;
clock-names = "fck", "ahclkx"; clock-names = "fck", "ahclkx";
status = "disabled"; status = "disabled";
@ -2926,9 +2922,8 @@
<SYSC_IDLE_SMART>; <SYSC_IDLE_SMART>;
/* Domains (P, C): l4per_pwrdm, l4per2_clkdm */ /* Domains (P, C): l4per_pwrdm, l4per2_clkdm */
clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 0>, clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 0>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 24>, <&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 24>;
<&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 28>; clock-names = "fck", "ahclkx";
clock-names = "fck", "ahclkx", "ahclkr";
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
ranges = <0x0 0x78000 0x2000>, ranges = <0x0 0x78000 0x2000>,
@ -2944,7 +2939,7 @@
interrupt-names = "tx", "rx"; interrupt-names = "tx", "rx";
dmas = <&edma_xbar 141 1>, <&edma_xbar 140 1>; dmas = <&edma_xbar 141 1>, <&edma_xbar 140 1>;
dma-names = "tx", "rx"; dma-names = "tx", "rx";
clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 22>, clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 0>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 24>; <&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 24>;
clock-names = "fck", "ahclkx"; clock-names = "fck", "ahclkx";
status = "disabled"; status = "disabled";
@ -2961,9 +2956,8 @@
<SYSC_IDLE_SMART>; <SYSC_IDLE_SMART>;
/* Domains (P, C): l4per_pwrdm, l4per2_clkdm */ /* Domains (P, C): l4per_pwrdm, l4per2_clkdm */
clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 0>, clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 0>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 24>, <&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 24>;
<&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 28>; clock-names = "fck", "ahclkx";
clock-names = "fck", "ahclkx", "ahclkr";
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
ranges = <0x0 0x7c000 0x2000>, ranges = <0x0 0x7c000 0x2000>,
@ -2979,7 +2973,7 @@
interrupt-names = "tx", "rx"; interrupt-names = "tx", "rx";
dmas = <&edma_xbar 143 1>, <&edma_xbar 142 1>; dmas = <&edma_xbar 143 1>, <&edma_xbar 142 1>;
dma-names = "tx", "rx"; dma-names = "tx", "rx";
clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 22>, clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 0>,
<&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 24>; <&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 24>;
clock-names = "fck", "ahclkx"; clock-names = "fck", "ahclkx";
status = "disabled"; status = "disabled";


@ -124,6 +124,7 @@
spi-max-frequency = <100000>; spi-max-frequency = <100000>;
spi-cpol; spi-cpol;
spi-cpha; spi-cpha;
spi-cs-high;
backlight= <&backlight>; backlight= <&backlight>;
label = "lcd"; label = "lcd";


@ -8,6 +8,7 @@
#include <dt-bindings/mfd/dbx500-prcmu.h> #include <dt-bindings/mfd/dbx500-prcmu.h>
#include <dt-bindings/arm/ux500_pm_domains.h> #include <dt-bindings/arm/ux500_pm_domains.h>
#include <dt-bindings/gpio/gpio.h> #include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/thermal/thermal.h>
/ { / {
#address-cells = <1>; #address-cells = <1>;
@ -59,8 +60,12 @@
* cooling. * cooling.
*/ */
cpu_thermal: cpu-thermal { cpu_thermal: cpu-thermal {
polling-delay-passive = <0>; polling-delay-passive = <250>;
polling-delay = <1000>; /*
* This sensor fires interrupts to update the thermal
* zone, so no polling is needed.
*/
polling-delay = <0>;
thermal-sensors = <&thermal>; thermal-sensors = <&thermal>;
@ -79,7 +84,7 @@
cooling-maps { cooling-maps {
trip = <&cpu_alert>; trip = <&cpu_alert>;
cooling-device = <&CPU0 0 2>; cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
contribution = <100>; contribution = <100>;
}; };
}; };


@ -228,7 +228,7 @@ CONFIG_RTC_DRV_OMAP=m
CONFIG_DMADEVICES=y CONFIG_DMADEVICES=y
CONFIG_TI_EDMA=y CONFIG_TI_EDMA=y
CONFIG_COMMON_CLK_PWM=m CONFIG_COMMON_CLK_PWM=m
CONFIG_REMOTEPROC=m CONFIG_REMOTEPROC=y
CONFIG_DA8XX_REMOTEPROC=m CONFIG_DA8XX_REMOTEPROC=m
CONFIG_MEMORY=y CONFIG_MEMORY=y
CONFIG_TI_AEMIF=m CONFIG_TI_AEMIF=m


@ -415,7 +415,7 @@ CONFIG_SPI_SH_MSIOF=m
CONFIG_SPI_SH_HSPI=y CONFIG_SPI_SH_HSPI=y
CONFIG_SPI_SIRF=y CONFIG_SPI_SIRF=y
CONFIG_SPI_STM32=m CONFIG_SPI_STM32=m
CONFIG_SPI_STM32_QSPI=m CONFIG_SPI_STM32_QSPI=y
CONFIG_SPI_SUN4I=y CONFIG_SPI_SUN4I=y
CONFIG_SPI_SUN6I=y CONFIG_SPI_SUN6I=y
CONFIG_SPI_TEGRA114=y CONFIG_SPI_TEGRA114=y
@ -933,7 +933,7 @@ CONFIG_BCM2835_MBOX=y
CONFIG_ROCKCHIP_IOMMU=y CONFIG_ROCKCHIP_IOMMU=y
CONFIG_TEGRA_IOMMU_GART=y CONFIG_TEGRA_IOMMU_GART=y
CONFIG_TEGRA_IOMMU_SMMU=y CONFIG_TEGRA_IOMMU_SMMU=y
CONFIG_REMOTEPROC=m CONFIG_REMOTEPROC=y
CONFIG_ST_REMOTEPROC=m CONFIG_ST_REMOTEPROC=m
CONFIG_RPMSG_VIRTIO=m CONFIG_RPMSG_VIRTIO=m
CONFIG_ASPEED_LPC_CTRL=m CONFIG_ASPEED_LPC_CTRL=m


@ -364,6 +364,7 @@ CONFIG_DRM_OMAP_PANEL_TPO_TD043MTEA1=m
CONFIG_DRM_OMAP_PANEL_NEC_NL8048HL11=m CONFIG_DRM_OMAP_PANEL_NEC_NL8048HL11=m
CONFIG_DRM_TILCDC=m CONFIG_DRM_TILCDC=m
CONFIG_DRM_PANEL_SIMPLE=m CONFIG_DRM_PANEL_SIMPLE=m
CONFIG_DRM_TI_TFP410=m
CONFIG_FB=y CONFIG_FB=y
CONFIG_FIRMWARE_EDID=y CONFIG_FIRMWARE_EDID=y
CONFIG_FB_MODE_HELPERS=y CONFIG_FB_MODE_HELPERS=y
@ -423,6 +424,7 @@ CONFIG_USB_SERIAL_GENERIC=y
CONFIG_USB_SERIAL_SIMPLE=m CONFIG_USB_SERIAL_SIMPLE=m
CONFIG_USB_SERIAL_FTDI_SIO=m CONFIG_USB_SERIAL_FTDI_SIO=m
CONFIG_USB_SERIAL_PL2303=m CONFIG_USB_SERIAL_PL2303=m
CONFIG_USB_SERIAL_OPTION=m
CONFIG_USB_TEST=m CONFIG_USB_TEST=m
CONFIG_NOP_USB_XCEIV=m CONFIG_NOP_USB_XCEIV=m
CONFIG_AM335X_PHY_USB=m CONFIG_AM335X_PHY_USB=m
@ -460,6 +462,7 @@ CONFIG_MMC_SDHCI_OMAP=y
CONFIG_NEW_LEDS=y CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=m CONFIG_LEDS_CLASS=m
CONFIG_LEDS_CPCAP=m CONFIG_LEDS_CPCAP=m
CONFIG_LEDS_LM3532=m
CONFIG_LEDS_GPIO=m CONFIG_LEDS_GPIO=m
CONFIG_LEDS_PCA963X=m CONFIG_LEDS_PCA963X=m
CONFIG_LEDS_PWM=m CONFIG_LEDS_PWM=m
@ -481,7 +484,7 @@ CONFIG_RTC_DRV_OMAP=m
CONFIG_RTC_DRV_CPCAP=m CONFIG_RTC_DRV_CPCAP=m
CONFIG_DMADEVICES=y CONFIG_DMADEVICES=y
CONFIG_OMAP_IOMMU=y CONFIG_OMAP_IOMMU=y
CONFIG_REMOTEPROC=m CONFIG_REMOTEPROC=y
CONFIG_OMAP_REMOTEPROC=m CONFIG_OMAP_REMOTEPROC=m
CONFIG_WKUP_M3_RPROC=m CONFIG_WKUP_M3_RPROC=m
CONFIG_SOC_TI=y CONFIG_SOC_TI=y


@ -1,6 +0,0 @@
#ifndef _ASM_XEN_OPS_H
#define _ASM_XEN_OPS_H
void xen_efi_runtime_setup(void);
#endif /* _ASM_XEN_OPS_H */


@ -763,7 +763,8 @@ static struct omap_hwmod_class_sysconfig am33xx_timer_sysc = {
.rev_offs = 0x0000, .rev_offs = 0x0000,
.sysc_offs = 0x0010, .sysc_offs = 0x0010,
.syss_offs = 0x0014, .syss_offs = 0x0014,
.sysc_flags = (SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET), .sysc_flags = SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET |
SYSC_HAS_RESET_STATUS,
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART | .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
SIDLE_SMART_WKUP), SIDLE_SMART_WKUP),
.sysc_fields = &omap_hwmod_sysc_type2, .sysc_fields = &omap_hwmod_sysc_type2,


@ -231,8 +231,9 @@ static struct omap_hwmod am33xx_control_hwmod = {
static struct omap_hwmod_class_sysconfig lcdc_sysc = { static struct omap_hwmod_class_sysconfig lcdc_sysc = {
.rev_offs = 0x0, .rev_offs = 0x0,
.sysc_offs = 0x54, .sysc_offs = 0x54,
.sysc_flags = (SYSC_HAS_SIDLEMODE | SYSC_HAS_MIDLEMODE), .sysc_flags = SYSC_HAS_SIDLEMODE | SYSC_HAS_MIDLEMODE,
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART), .idlemodes = SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
MSTANDBY_FORCE | MSTANDBY_NO | MSTANDBY_SMART,
.sysc_fields = &omap_hwmod_sysc_type2, .sysc_fields = &omap_hwmod_sysc_type2,
}; };


@ -74,83 +74,6 @@ int omap_pm_clkdms_setup(struct clockdomain *clkdm, void *unused)
return 0; return 0;
} }
/*
* This API is to be called during init to set the various voltage
* domains to the voltage as per the opp table. Typically we boot up
* at the nominal voltage. So this function finds out the rate of
* the clock associated with the voltage domain, finds out the correct
* opp entry and sets the voltage domain to the voltage specified
* in the opp entry
*/
static int __init omap2_set_init_voltage(char *vdd_name, char *clk_name,
const char *oh_name)
{
struct voltagedomain *voltdm;
struct clk *clk;
struct dev_pm_opp *opp;
unsigned long freq, bootup_volt;
struct device *dev;
if (!vdd_name || !clk_name || !oh_name) {
pr_err("%s: invalid parameters\n", __func__);
goto exit;
}
if (!strncmp(oh_name, "mpu", 3))
/*
* All current OMAPs share voltage rail and clock
* source, so CPU0 is used to represent the MPU-SS.
*/
dev = get_cpu_device(0);
else
dev = omap_device_get_by_hwmod_name(oh_name);
if (IS_ERR(dev)) {
pr_err("%s: Unable to get dev pointer for hwmod %s\n",
__func__, oh_name);
goto exit;
}
voltdm = voltdm_lookup(vdd_name);
if (!voltdm) {
pr_err("%s: unable to get vdd pointer for vdd_%s\n",
__func__, vdd_name);
goto exit;
}
clk = clk_get(NULL, clk_name);
if (IS_ERR(clk)) {
pr_err("%s: unable to get clk %s\n", __func__, clk_name);
goto exit;
}
freq = clk_get_rate(clk);
clk_put(clk);
opp = dev_pm_opp_find_freq_ceil(dev, &freq);
if (IS_ERR(opp)) {
pr_err("%s: unable to find boot up OPP for vdd_%s\n",
__func__, vdd_name);
goto exit;
}
bootup_volt = dev_pm_opp_get_voltage(opp);
dev_pm_opp_put(opp);
if (!bootup_volt) {
pr_err("%s: unable to find voltage corresponding to the bootup OPP for vdd_%s\n",
__func__, vdd_name);
goto exit;
}
voltdm_scale(voltdm, bootup_volt);
return 0;
exit:
pr_err("%s: unable to set vdd_%s\n", __func__, vdd_name);
return -EINVAL;
}
#ifdef CONFIG_SUSPEND #ifdef CONFIG_SUSPEND
static int omap_pm_enter(suspend_state_t suspend_state) static int omap_pm_enter(suspend_state_t suspend_state)
{ {
@ -208,25 +131,6 @@ void omap_common_suspend_init(void *pm_suspend)
} }
#endif /* CONFIG_SUSPEND */ #endif /* CONFIG_SUSPEND */
static void __init omap3_init_voltages(void)
{
if (!soc_is_omap34xx())
return;
omap2_set_init_voltage("mpu_iva", "dpll1_ck", "mpu");
omap2_set_init_voltage("core", "l3_ick", "l3_main");
}
static void __init omap4_init_voltages(void)
{
if (!soc_is_omap44xx())
return;
omap2_set_init_voltage("mpu", "dpll_mpu_ck", "mpu");
omap2_set_init_voltage("core", "l3_div_ck", "l3_main_1");
omap2_set_init_voltage("iva", "dpll_iva_m5x2_ck", "iva");
}
int __maybe_unused omap_pm_nop_init(void) int __maybe_unused omap_pm_nop_init(void)
{ {
return 0; return 0;
@ -246,10 +150,6 @@ int __init omap2_common_pm_late_init(void)
omap4_twl_init(); omap4_twl_init();
omap_voltage_late_init(); omap_voltage_late_init();
/* Initialize the voltages */
omap3_init_voltages();
omap4_init_voltages();
/* Smartreflex device init */ /* Smartreflex device init */
omap_devinit_smartreflex(); omap_devinit_smartreflex();


@ -1,3 +1,2 @@
# SPDX-License-Identifier: GPL-2.0-only # SPDX-License-Identifier: GPL-2.0-only
obj-y := enlighten.o hypercall.o grant-table.o p2m.o mm.o obj-y := enlighten.o hypercall.o grant-table.o p2m.o mm.o
obj-$(CONFIG_XEN_EFI) += efi.o


@ -1,28 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (c) 2015, Linaro Limited, Shannon Zhao
*/
#include <linux/efi.h>
#include <xen/xen-ops.h>
#include <asm/xen/xen-ops.h>
/* Set XEN EFI runtime services function pointers. Other fields of struct efi,
* e.g. efi.systab, will be set like normal EFI.
*/
void __init xen_efi_runtime_setup(void)
{
efi.get_time = xen_efi_get_time;
efi.set_time = xen_efi_set_time;
efi.get_wakeup_time = xen_efi_get_wakeup_time;
efi.set_wakeup_time = xen_efi_set_wakeup_time;
efi.get_variable = xen_efi_get_variable;
efi.get_next_variable = xen_efi_get_next_variable;
efi.set_variable = xen_efi_set_variable;
efi.query_variable_info = xen_efi_query_variable_info;
efi.update_capsule = xen_efi_update_capsule;
efi.query_capsule_caps = xen_efi_query_capsule_caps;
efi.get_next_high_mono_count = xen_efi_get_next_high_mono_count;
efi.reset_system = xen_efi_reset_system;
}
EXPORT_SYMBOL_GPL(xen_efi_runtime_setup);


@ -15,7 +15,6 @@
#include <xen/xen-ops.h> #include <xen/xen-ops.h>
#include <asm/xen/hypervisor.h> #include <asm/xen/hypervisor.h>
#include <asm/xen/hypercall.h> #include <asm/xen/hypercall.h>
#include <asm/xen/xen-ops.h>
#include <asm/system_misc.h> #include <asm/system_misc.h>
#include <asm/efi.h> #include <asm/efi.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
@ -437,7 +436,7 @@ EXPORT_SYMBOL_GPL(HYPERVISOR_memory_op);
EXPORT_SYMBOL_GPL(HYPERVISOR_physdev_op); EXPORT_SYMBOL_GPL(HYPERVISOR_physdev_op);
EXPORT_SYMBOL_GPL(HYPERVISOR_vcpu_op); EXPORT_SYMBOL_GPL(HYPERVISOR_vcpu_op);
EXPORT_SYMBOL_GPL(HYPERVISOR_tmem_op); EXPORT_SYMBOL_GPL(HYPERVISOR_tmem_op);
EXPORT_SYMBOL_GPL(HYPERVISOR_platform_op); EXPORT_SYMBOL_GPL(HYPERVISOR_platform_op_raw);
EXPORT_SYMBOL_GPL(HYPERVISOR_multicall); EXPORT_SYMBOL_GPL(HYPERVISOR_multicall);
EXPORT_SYMBOL_GPL(HYPERVISOR_vm_assist); EXPORT_SYMBOL_GPL(HYPERVISOR_vm_assist);
EXPORT_SYMBOL_GPL(HYPERVISOR_dm_op); EXPORT_SYMBOL_GPL(HYPERVISOR_dm_op);


@ -28,7 +28,10 @@ unsigned long xen_get_swiotlb_free_pages(unsigned int order)
for_each_memblock(memory, reg) { for_each_memblock(memory, reg) {
if (reg->base < (phys_addr_t)0xffffffff) { if (reg->base < (phys_addr_t)0xffffffff) {
flags |= __GFP_DMA; if (IS_ENABLED(CONFIG_ZONE_DMA32))
flags |= __GFP_DMA32;
else
flags |= __GFP_DMA;
break; break;
} }
} }


@ -723,7 +723,7 @@ CONFIG_TEGRA_IOMMU_SMMU=y
CONFIG_ARM_SMMU=y CONFIG_ARM_SMMU=y
CONFIG_ARM_SMMU_V3=y CONFIG_ARM_SMMU_V3=y
CONFIG_QCOM_IOMMU=y CONFIG_QCOM_IOMMU=y
CONFIG_REMOTEPROC=m CONFIG_REMOTEPROC=y
CONFIG_QCOM_Q6V5_MSS=m CONFIG_QCOM_Q6V5_MSS=m
CONFIG_QCOM_Q6V5_PAS=m CONFIG_QCOM_Q6V5_PAS=m
CONFIG_QCOM_SYSMON=m CONFIG_QCOM_SYSMON=m


@ -47,30 +47,6 @@
#define read_sysreg_el2(r) read_sysreg_elx(r, _EL2, _EL1) #define read_sysreg_el2(r) read_sysreg_elx(r, _EL2, _EL1)
#define write_sysreg_el2(v,r) write_sysreg_elx(v, r, _EL2, _EL1) #define write_sysreg_el2(v,r) write_sysreg_elx(v, r, _EL2, _EL1)
/**
* hyp_alternate_select - Generates patchable code sequences that are
* used to switch between two implementations of a function, depending
* on the availability of a feature.
*
* @fname: a symbol name that will be defined as a function returning a
* function pointer whose type will match @orig and @alt
* @orig: A pointer to the default function, as returned by @fname when
* @cond doesn't hold
* @alt: A pointer to the alternate function, as returned by @fname
* when @cond holds
* @cond: a CPU feature (as described in asm/cpufeature.h)
*/
#define hyp_alternate_select(fname, orig, alt, cond) \
typeof(orig) * __hyp_text fname(void) \
{ \
typeof(alt) *val = orig; \
asm volatile(ALTERNATIVE("nop \n", \
"mov %0, %1 \n", \
cond) \
: "+r" (val) : "r" (alt)); \
return val; \
}
int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu); int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu);
void __vgic_v3_save_state(struct kvm_vcpu *vcpu); void __vgic_v3_save_state(struct kvm_vcpu *vcpu);


@ -1,7 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_XEN_OPS_H
#define _ASM_XEN_OPS_H
void xen_efi_runtime_setup(void);
#endif /* _ASM_XEN_OPS_H */


@ -229,20 +229,6 @@ static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu)
} }
} }
static bool __hyp_text __true_value(void)
{
return true;
}
static bool __hyp_text __false_value(void)
{
return false;
}
static hyp_alternate_select(__check_arm_834220,
__false_value, __true_value,
ARM64_WORKAROUND_834220);
static bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) static bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar)
{ {
u64 par, tmp; u64 par, tmp;
@ -298,7 +284,8 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
* resolve the IPA using the AT instruction. * resolve the IPA using the AT instruction.
*/ */
if (!(esr & ESR_ELx_S1PTW) && if (!(esr & ESR_ELx_S1PTW) &&
(__check_arm_834220()() || (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) { (cpus_have_const_cap(ARM64_WORKAROUND_834220) ||
(esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) {
if (!__translate_far_to_hpfar(far, &hpfar)) if (!__translate_far_to_hpfar(far, &hpfar))
return false; return false;
} else { } else {


@ -67,10 +67,14 @@ static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm,
isb(); isb();
} }
static hyp_alternate_select(__tlb_switch_to_guest, static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm,
__tlb_switch_to_guest_nvhe, struct tlb_inv_context *cxt)
__tlb_switch_to_guest_vhe, {
ARM64_HAS_VIRT_HOST_EXTN); if (has_vhe())
__tlb_switch_to_guest_vhe(kvm, cxt);
else
__tlb_switch_to_guest_nvhe(kvm, cxt);
}
static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm, static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm,
struct tlb_inv_context *cxt) struct tlb_inv_context *cxt)
@ -98,10 +102,14 @@ static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm,
write_sysreg(0, vttbr_el2); write_sysreg(0, vttbr_el2);
} }
static hyp_alternate_select(__tlb_switch_to_host, static void __hyp_text __tlb_switch_to_host(struct kvm *kvm,
__tlb_switch_to_host_nvhe, struct tlb_inv_context *cxt)
__tlb_switch_to_host_vhe, {
ARM64_HAS_VIRT_HOST_EXTN); if (has_vhe())
__tlb_switch_to_host_vhe(kvm, cxt);
else
__tlb_switch_to_host_nvhe(kvm, cxt);
}
void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
{ {
@ -111,7 +119,7 @@ void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
/* Switch to requested VMID */ /* Switch to requested VMID */
kvm = kern_hyp_va(kvm); kvm = kern_hyp_va(kvm);
__tlb_switch_to_guest()(kvm, &cxt); __tlb_switch_to_guest(kvm, &cxt);
/* /*
* We could do so much better if we had the VA as well. * We could do so much better if we had the VA as well.
@ -154,7 +162,7 @@ void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
if (!has_vhe() && icache_is_vpipt()) if (!has_vhe() && icache_is_vpipt())
__flush_icache_all(); __flush_icache_all();
__tlb_switch_to_host()(kvm, &cxt); __tlb_switch_to_host(kvm, &cxt);
} }
void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm) void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm)
@ -165,13 +173,13 @@ void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm)
/* Switch to requested VMID */ /* Switch to requested VMID */
kvm = kern_hyp_va(kvm); kvm = kern_hyp_va(kvm);
__tlb_switch_to_guest()(kvm, &cxt); __tlb_switch_to_guest(kvm, &cxt);
__tlbi(vmalls12e1is); __tlbi(vmalls12e1is);
dsb(ish); dsb(ish);
isb(); isb();
__tlb_switch_to_host()(kvm, &cxt); __tlb_switch_to_host(kvm, &cxt);
} }
void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
@ -180,13 +188,13 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
struct tlb_inv_context cxt; struct tlb_inv_context cxt;
/* Switch to requested VMID */ /* Switch to requested VMID */
__tlb_switch_to_guest()(kvm, &cxt); __tlb_switch_to_guest(kvm, &cxt);
__tlbi(vmalle1); __tlbi(vmalle1);
dsb(nsh); dsb(nsh);
isb(); isb();
__tlb_switch_to_host()(kvm, &cxt); __tlb_switch_to_host(kvm, &cxt);
} }
void __hyp_text __kvm_flush_vm_context(void) void __hyp_text __kvm_flush_vm_context(void)


@ -1,4 +1,3 @@
# SPDX-License-Identifier: GPL-2.0-only # SPDX-License-Identifier: GPL-2.0-only
xen-arm-y += $(addprefix ../../arm/xen/, enlighten.o grant-table.o p2m.o mm.o) xen-arm-y += $(addprefix ../../arm/xen/, enlighten.o grant-table.o p2m.o mm.o)
obj-y := xen-arm.o hypercall.o obj-y := xen-arm.o hypercall.o
obj-$(CONFIG_XEN_EFI) += $(addprefix ../../arm/xen/, efi.o)


@ -99,7 +99,7 @@
miscintc: interrupt-controller@18060010 { miscintc: interrupt-controller@18060010 {
compatible = "qca,ar7240-misc-intc"; compatible = "qca,ar7240-misc-intc";
reg = <0x18060010 0x4>; reg = <0x18060010 0x8>;
interrupt-parent = <&cpuintc>; interrupt-parent = <&cpuintc>;
interrupts = <6>; interrupts = <6>;


@ -160,7 +160,6 @@ void __init prom_meminit(void)
void __init prom_free_prom_memory(void) void __init prom_free_prom_memory(void)
{ {
unsigned long addr;
int i; int i;
if (prom_flags & PROM_FLAG_DONT_FREE_TEMP) if (prom_flags & PROM_FLAG_DONT_FREE_TEMP)


@ -36,6 +36,7 @@
#include <asm/octeon/octeon-feature.h> #include <asm/octeon/octeon-feature.h>
#include <asm/octeon/cvmx-ipd-defs.h> #include <asm/octeon/cvmx-ipd-defs.h>
#include <asm/octeon/cvmx-pip-defs.h>
enum cvmx_ipd_mode { enum cvmx_ipd_mode {
CVMX_IPD_OPC_MODE_STT = 0LL, /* All blocks DRAM, not cached in L2 */ CVMX_IPD_OPC_MODE_STT = 0LL, /* All blocks DRAM, not cached in L2 */


@ -52,6 +52,7 @@
# endif # endif
#define __ARCH_WANT_SYS_FORK #define __ARCH_WANT_SYS_FORK
#define __ARCH_WANT_SYS_CLONE #define __ARCH_WANT_SYS_CLONE
#define __ARCH_WANT_SYS_CLONE3
/* whitelists for checksyscalls */ /* whitelists for checksyscalls */
#define __IGNORE_fadvise64_64 #define __IGNORE_fadvise64_64


@ -24,7 +24,8 @@ static char r4kwar[] __initdata =
static char daddiwar[] __initdata = static char daddiwar[] __initdata =
"Enable CPU_DADDI_WORKAROUNDS to rectify."; "Enable CPU_DADDI_WORKAROUNDS to rectify.";
static inline void align_mod(const int align, const int mod) static __always_inline __init
void align_mod(const int align, const int mod)
{ {
asm volatile( asm volatile(
".set push\n\t" ".set push\n\t"
@ -38,8 +39,9 @@ static inline void align_mod(const int align, const int mod)
: "n"(align), "n"(mod)); : "n"(align), "n"(mod));
} }
static __always_inline void mult_sh_align_mod(long *v1, long *v2, long *w, static __always_inline __init
const int align, const int mod) void mult_sh_align_mod(long *v1, long *v2, long *w,
const int align, const int mod)
{ {
unsigned long flags; unsigned long flags;
int m1, m2; int m1, m2;
@ -113,7 +115,7 @@ static __always_inline void mult_sh_align_mod(long *v1, long *v2, long *w,
*w = lw; *w = lw;
} }
static inline void check_mult_sh(void) static __always_inline __init void check_mult_sh(void)
{ {
long v1[8], v2[8], w[8]; long v1[8], v2[8], w[8];
int bug, fix, i; int bug, fix, i;
@ -176,7 +178,7 @@ asmlinkage void __init do_daddi_ov(struct pt_regs *regs)
exception_exit(prev_state); exception_exit(prev_state);
} }
static inline void check_daddi(void) static __init void check_daddi(void)
{ {
extern asmlinkage void handle_daddi_ov(void); extern asmlinkage void handle_daddi_ov(void);
unsigned long flags; unsigned long flags;
@ -242,7 +244,7 @@ static inline void check_daddi(void)
int daddiu_bug = IS_ENABLED(CONFIG_CPU_MIPSR6) ? 0 : -1; int daddiu_bug = IS_ENABLED(CONFIG_CPU_MIPSR6) ? 0 : -1;
static inline void check_daddiu(void) static __init void check_daddiu(void)
{ {
long v, w, tmp; long v, w, tmp;


@ -108,6 +108,9 @@ void __init add_memory_region(phys_addr_t start, phys_addr_t size, long type)
return; return;
} }
if (start < PHYS_OFFSET)
return;
memblock_add(start, size); memblock_add(start, size);
/* Reserve any memory except the ordinary RAM ranges. */ /* Reserve any memory except the ordinary RAM ranges. */
switch (type) { switch (type) {
@ -321,7 +324,7 @@ static void __init bootmem_init(void)
* Reserve any memory between the start of RAM and PHYS_OFFSET * Reserve any memory between the start of RAM and PHYS_OFFSET
*/ */
if (ramstart > PHYS_OFFSET) if (ramstart > PHYS_OFFSET)
memblock_reserve(PHYS_OFFSET, PFN_UP(ramstart) - PHYS_OFFSET); memblock_reserve(PHYS_OFFSET, ramstart - PHYS_OFFSET);
if (PFN_UP(ramstart) > ARCH_PFN_OFFSET) { if (PFN_UP(ramstart) > ARCH_PFN_OFFSET) {
pr_info("Wasting %lu bytes for tracking %lu unused pages\n", pr_info("Wasting %lu bytes for tracking %lu unused pages\n",


@ -80,6 +80,7 @@ SYSCALL_DEFINE6(mips_mmap2, unsigned long, addr, unsigned long, len,
save_static_function(sys_fork); save_static_function(sys_fork);
save_static_function(sys_clone); save_static_function(sys_clone);
save_static_function(sys_clone3);
SYSCALL_DEFINE1(set_thread_area, unsigned long, addr) SYSCALL_DEFINE1(set_thread_area, unsigned long, addr)
{ {


@ -373,4 +373,4 @@
432 n32 fsmount sys_fsmount 432 n32 fsmount sys_fsmount
433 n32 fspick sys_fspick 433 n32 fspick sys_fspick
434 n32 pidfd_open sys_pidfd_open 434 n32 pidfd_open sys_pidfd_open
# 435 reserved for clone3 435 n32 clone3 __sys_clone3


@ -349,4 +349,4 @@
432 n64 fsmount sys_fsmount 432 n64 fsmount sys_fsmount
433 n64 fspick sys_fspick 433 n64 fspick sys_fspick
434 n64 pidfd_open sys_pidfd_open 434 n64 pidfd_open sys_pidfd_open
# 435 reserved for clone3 435 n64 clone3 __sys_clone3


@ -422,4 +422,4 @@
432 o32 fsmount sys_fsmount 432 o32 fsmount sys_fsmount
433 o32 fspick sys_fspick 433 o32 fspick sys_fspick
434 o32 pidfd_open sys_pidfd_open 434 o32 pidfd_open sys_pidfd_open
# 435 reserved for clone3 435 o32 clone3 __sys_clone3


@ -3,6 +3,7 @@
*/ */
#include <linux/fs.h> #include <linux/fs.h>
#include <linux/fcntl.h> #include <linux/fcntl.h>
#include <linux/memblock.h>
#include <linux/mm.h> #include <linux/mm.h>
#include <asm/bootinfo.h> #include <asm/bootinfo.h>
@ -64,24 +65,22 @@ void __init prom_init_memory(void)
node_id = loongson_memmap->map[i].node_id; node_id = loongson_memmap->map[i].node_id;
mem_type = loongson_memmap->map[i].mem_type; mem_type = loongson_memmap->map[i].mem_type;
if (node_id == 0) { if (node_id != 0)
switch (mem_type) { continue;
case SYSTEM_RAM_LOW:
add_memory_region(loongson_memmap->map[i].mem_start, switch (mem_type) {
(u64)loongson_memmap->map[i].mem_size << 20, case SYSTEM_RAM_LOW:
BOOT_MEM_RAM); memblock_add(loongson_memmap->map[i].mem_start,
break; (u64)loongson_memmap->map[i].mem_size << 20);
case SYSTEM_RAM_HIGH: break;
add_memory_region(loongson_memmap->map[i].mem_start, case SYSTEM_RAM_HIGH:
(u64)loongson_memmap->map[i].mem_size << 20, memblock_add(loongson_memmap->map[i].mem_start,
BOOT_MEM_RAM); (u64)loongson_memmap->map[i].mem_size << 20);
break; break;
case SYSTEM_RAM_RESERVED: case SYSTEM_RAM_RESERVED:
add_memory_region(loongson_memmap->map[i].mem_start, memblock_reserve(loongson_memmap->map[i].mem_start,
(u64)loongson_memmap->map[i].mem_size << 20, (u64)loongson_memmap->map[i].mem_size << 20);
BOOT_MEM_RESERVED); break;
break;
}
} }
} }
} }


@ -110,7 +110,7 @@ static int __init serial_init(void)
} }
module_init(serial_init); module_init(serial_init);
static void __init serial_exit(void) static void __exit serial_exit(void)
{ {
platform_device_unregister(&uart8250_device); platform_device_unregister(&uart8250_device);
} }


@ -142,8 +142,6 @@ static void __init szmem(unsigned int node)
(u32)node_id, mem_type, mem_start, mem_size); (u32)node_id, mem_type, mem_start, mem_size);
pr_info(" start_pfn:0x%llx, end_pfn:0x%llx, num_physpages:0x%lx\n", pr_info(" start_pfn:0x%llx, end_pfn:0x%llx, num_physpages:0x%lx\n",
start_pfn, end_pfn, num_physpages); start_pfn, end_pfn, num_physpages);
add_memory_region((node_id << 44) + mem_start,
(u64)mem_size << 20, BOOT_MEM_RAM);
memblock_add_node(PFN_PHYS(start_pfn), memblock_add_node(PFN_PHYS(start_pfn),
PFN_PHYS(end_pfn - start_pfn), node); PFN_PHYS(end_pfn - start_pfn), node);
break; break;
@ -156,16 +154,12 @@ static void __init szmem(unsigned int node)
(u32)node_id, mem_type, mem_start, mem_size); (u32)node_id, mem_type, mem_start, mem_size);
pr_info(" start_pfn:0x%llx, end_pfn:0x%llx, num_physpages:0x%lx\n", pr_info(" start_pfn:0x%llx, end_pfn:0x%llx, num_physpages:0x%lx\n",
start_pfn, end_pfn, num_physpages); start_pfn, end_pfn, num_physpages);
add_memory_region((node_id << 44) + mem_start,
(u64)mem_size << 20, BOOT_MEM_RAM);
memblock_add_node(PFN_PHYS(start_pfn), memblock_add_node(PFN_PHYS(start_pfn),
PFN_PHYS(end_pfn - start_pfn), node); PFN_PHYS(end_pfn - start_pfn), node);
break; break;
case SYSTEM_RAM_RESERVED: case SYSTEM_RAM_RESERVED:
pr_info("Node%d: mem_type:%d, mem_start:0x%llx, mem_size:0x%llx MB\n", pr_info("Node%d: mem_type:%d, mem_start:0x%llx, mem_size:0x%llx MB\n",
(u32)node_id, mem_type, mem_start, mem_size); (u32)node_id, mem_type, mem_start, mem_size);
add_memory_region((node_id << 44) + mem_start,
(u64)mem_size << 20, BOOT_MEM_RESERVED);
memblock_reserve(((node_id << 44) + mem_start), memblock_reserve(((node_id << 44) + mem_start),
mem_size << 20); mem_size << 20);
break; break;
@ -191,8 +185,6 @@ static void __init node_mem_init(unsigned int node)
NODE_DATA(node)->node_start_pfn = start_pfn; NODE_DATA(node)->node_start_pfn = start_pfn;
NODE_DATA(node)->node_spanned_pages = end_pfn - start_pfn; NODE_DATA(node)->node_spanned_pages = end_pfn - start_pfn;
free_bootmem_with_active_regions(node, end_pfn);
if (node == 0) { if (node == 0) {
/* kernel end address */ /* kernel end address */
unsigned long kernel_end_pfn = PFN_UP(__pa_symbol(&_end)); unsigned long kernel_end_pfn = PFN_UP(__pa_symbol(&_end));
@ -209,8 +201,6 @@ static void __init node_mem_init(unsigned int node)
memblock_reserve((node_addrspace_offset | 0xfe000000), memblock_reserve((node_addrspace_offset | 0xfe000000),
32 << 20); 32 << 20);
} }
sparse_memory_present_with_active_regions(node);
} }
static __init void prom_meminit(void) static __init void prom_meminit(void)
@ -227,6 +217,7 @@ static __init void prom_meminit(void)
cpumask_clear(&__node_data[(node)]->cpumask); cpumask_clear(&__node_data[(node)]->cpumask);
} }
} }
memblocks_present();
max_low_pfn = PHYS_PFN(memblock_end_of_DRAM()); max_low_pfn = PHYS_PFN(memblock_end_of_DRAM());
for (cpu = 0; cpu < loongson_sysconf.nr_cpus; cpu++) { for (cpu = 0; cpu < loongson_sysconf.nr_cpus; cpu++) {


@ -61,6 +61,7 @@ int init_debug = 1;
/* memory blocks */ /* memory blocks */
struct prom_pmemblock mdesc[PROM_MAX_PMEMBLOCKS]; struct prom_pmemblock mdesc[PROM_MAX_PMEMBLOCKS];
#define MAX_PROM_MEM 5
static phys_addr_t prom_mem_base[MAX_PROM_MEM] __initdata; static phys_addr_t prom_mem_base[MAX_PROM_MEM] __initdata;
static phys_addr_t prom_mem_size[MAX_PROM_MEM] __initdata; static phys_addr_t prom_mem_size[MAX_PROM_MEM] __initdata;
static unsigned int nr_prom_mem __initdata; static unsigned int nr_prom_mem __initdata;
@ -358,7 +359,7 @@ void __init prom_meminit(void)
p++; p++;
if (type == BOOT_MEM_ROM_DATA) { if (type == BOOT_MEM_ROM_DATA) {
if (nr_prom_mem >= 5) { if (nr_prom_mem >= MAX_PROM_MEM) {
pr_err("Too many ROM DATA regions"); pr_err("Too many ROM DATA regions");
continue; continue;
} }
@ -377,7 +378,6 @@ void __init prom_free_prom_memory(void)
char *ptr; char *ptr;
int len = 0; int len = 0;
int i; int i;
unsigned long addr;
/* /*
* preserve environment variables and command line from pmon/bbload * preserve environment variables and command line from pmon/bbload


@ -59,7 +59,7 @@ CFLAGS_REMOVE_vgettimeofday.o = -pg
ifndef CONFIG_CPU_MIPSR6 ifndef CONFIG_CPU_MIPSR6
ifeq ($(call ld-ifversion, -lt, 225000000, y),y) ifeq ($(call ld-ifversion, -lt, 225000000, y),y)
$(warning MIPS VDSO requires binutils >= 2.25) $(warning MIPS VDSO requires binutils >= 2.25)
obj-vdso-y := $(filter-out gettimeofday.o, $(obj-vdso-y)) obj-vdso-y := $(filter-out vgettimeofday.o, $(obj-vdso-y))
ccflags-vdso += -DDISABLE_MIPS_VDSO ccflags-vdso += -DDISABLE_MIPS_VDSO
endif endif
endif endif


@ -1,269 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) 2015 Imagination Technologies
* Author: Alex Smith <alex.smith@imgtec.com>
*/
#include "vdso.h"
#include <linux/compiler.h>
#include <linux/time.h>
#include <asm/clocksource.h>
#include <asm/io.h>
#include <asm/unistd.h>
#include <asm/vdso.h>
#ifdef CONFIG_MIPS_CLOCK_VSYSCALL
static __always_inline long gettimeofday_fallback(struct timeval *_tv,
struct timezone *_tz)
{
register struct timezone *tz asm("a1") = _tz;
register struct timeval *tv asm("a0") = _tv;
register long ret asm("v0");
register long nr asm("v0") = __NR_gettimeofday;
register long error asm("a3");
asm volatile(
" syscall\n"
: "=r" (ret), "=r" (error)
: "r" (tv), "r" (tz), "r" (nr)
: "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
"$14", "$15", "$24", "$25", "hi", "lo", "memory");
return error ? -ret : ret;
}
#endif
static __always_inline long clock_gettime_fallback(clockid_t _clkid,
struct timespec *_ts)
{
register struct timespec *ts asm("a1") = _ts;
register clockid_t clkid asm("a0") = _clkid;
register long ret asm("v0");
register long nr asm("v0") = __NR_clock_gettime;
register long error asm("a3");
asm volatile(
" syscall\n"
: "=r" (ret), "=r" (error)
: "r" (clkid), "r" (ts), "r" (nr)
: "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
"$14", "$15", "$24", "$25", "hi", "lo", "memory");
return error ? -ret : ret;
}
static __always_inline int do_realtime_coarse(struct timespec *ts,
const union mips_vdso_data *data)
{
u32 start_seq;
do {
start_seq = vdso_data_read_begin(data);
ts->tv_sec = data->xtime_sec;
ts->tv_nsec = data->xtime_nsec >> data->cs_shift;
} while (vdso_data_read_retry(data, start_seq));
return 0;
}
static __always_inline int do_monotonic_coarse(struct timespec *ts,
const union mips_vdso_data *data)
{
u32 start_seq;
u64 to_mono_sec;
u64 to_mono_nsec;
do {
start_seq = vdso_data_read_begin(data);
ts->tv_sec = data->xtime_sec;
ts->tv_nsec = data->xtime_nsec >> data->cs_shift;
to_mono_sec = data->wall_to_mono_sec;
to_mono_nsec = data->wall_to_mono_nsec;
} while (vdso_data_read_retry(data, start_seq));
ts->tv_sec += to_mono_sec;
timespec_add_ns(ts, to_mono_nsec);
return 0;
}
#ifdef CONFIG_CSRC_R4K
static __always_inline u64 read_r4k_count(void)
{
unsigned int count;
__asm__ __volatile__(
" .set push\n"
" .set mips32r2\n"
" rdhwr %0, $2\n"
" .set pop\n"
: "=r" (count));
return count;
}
#endif
#ifdef CONFIG_CLKSRC_MIPS_GIC
static __always_inline u64 read_gic_count(const union mips_vdso_data *data)
{
void __iomem *gic = get_gic(data);
u32 hi, hi2, lo;
do {
hi = __raw_readl(gic + sizeof(lo));
lo = __raw_readl(gic);
hi2 = __raw_readl(gic + sizeof(lo));
} while (hi2 != hi);
return (((u64)hi) << 32) + lo;
}
#endif
static __always_inline u64 get_ns(const union mips_vdso_data *data)
{
u64 cycle_now, delta, nsec;
switch (data->clock_mode) {
#ifdef CONFIG_CSRC_R4K
case VDSO_CLOCK_R4K:
cycle_now = read_r4k_count();
break;
#endif
#ifdef CONFIG_CLKSRC_MIPS_GIC
case VDSO_CLOCK_GIC:
cycle_now = read_gic_count(data);
break;
#endif
default:
return 0;
}
delta = (cycle_now - data->cs_cycle_last) & data->cs_mask;
nsec = (delta * data->cs_mult) + data->xtime_nsec;
nsec >>= data->cs_shift;
return nsec;
}
static __always_inline int do_realtime(struct timespec *ts,
const union mips_vdso_data *data)
{
u32 start_seq;
u64 ns;
do {
start_seq = vdso_data_read_begin(data);
if (data->clock_mode == VDSO_CLOCK_NONE)
return -ENOSYS;
ts->tv_sec = data->xtime_sec;
ns = get_ns(data);
} while (vdso_data_read_retry(data, start_seq));
ts->tv_nsec = 0;
timespec_add_ns(ts, ns);
return 0;
}
static __always_inline int do_monotonic(struct timespec *ts,
const union mips_vdso_data *data)
{
u32 start_seq;
u64 ns;
u64 to_mono_sec;
u64 to_mono_nsec;
do {
start_seq = vdso_data_read_begin(data);
if (data->clock_mode == VDSO_CLOCK_NONE)
return -ENOSYS;
ts->tv_sec = data->xtime_sec;
ns = get_ns(data);
to_mono_sec = data->wall_to_mono_sec;
to_mono_nsec = data->wall_to_mono_nsec;
} while (vdso_data_read_retry(data, start_seq));
ts->tv_sec += to_mono_sec;
ts->tv_nsec = 0;
timespec_add_ns(ts, ns + to_mono_nsec);
return 0;
}
#ifdef CONFIG_MIPS_CLOCK_VSYSCALL
/*
* This is behind the ifdef so that we don't provide the symbol when there's no
* possibility of there being a usable clocksource, because there's nothing we
* can do without it. When libc fails the symbol lookup it should fall back on
* the standard syscall path.
*/
int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
{
const union mips_vdso_data *data = get_vdso_data();
struct timespec ts;
int ret;
ret = do_realtime(&ts, data);
if (ret)
return gettimeofday_fallback(tv, tz);
if (tv) {
tv->tv_sec = ts.tv_sec;
tv->tv_usec = ts.tv_nsec / 1000;
}
if (tz) {
tz->tz_minuteswest = data->tz_minuteswest;
tz->tz_dsttime = data->tz_dsttime;
}
return 0;
}
#endif /* CONFIG_MIPS_CLOCK_VSYSCALL */
int __vdso_clock_gettime(clockid_t clkid, struct timespec *ts)
{
const union mips_vdso_data *data = get_vdso_data();
int ret = -1;
switch (clkid) {
case CLOCK_REALTIME_COARSE:
ret = do_realtime_coarse(ts, data);
break;
case CLOCK_MONOTONIC_COARSE:
ret = do_monotonic_coarse(ts, data);
break;
case CLOCK_REALTIME:
ret = do_realtime(ts, data);
break;
case CLOCK_MONOTONIC:
ret = do_monotonic(ts, data);
break;
default:
break;
}
if (ret)
ret = clock_gettime_fallback(clkid, ts);
return ret;
}


@ -50,7 +50,7 @@ endif
BOOTAFLAGS := -D__ASSEMBLY__ $(BOOTCFLAGS) -nostdinc BOOTAFLAGS := -D__ASSEMBLY__ $(BOOTCFLAGS) -nostdinc
BOOTARFLAGS := -cr$(KBUILD_ARFLAGS) BOOTARFLAGS := -crD
ifdef CONFIG_CC_IS_CLANG ifdef CONFIG_CC_IS_CLANG
BOOTCFLAGS += $(CLANG_FLAGS) BOOTCFLAGS += $(CLANG_FLAGS)


@ -36,8 +36,8 @@
#include "book3s.h" #include "book3s.h"
#include "trace.h" #include "trace.h"
#define VM_STAT(x) offsetof(struct kvm, stat.x), KVM_STAT_VM #define VM_STAT(x, ...) offsetof(struct kvm, stat.x), KVM_STAT_VM, ## __VA_ARGS__
#define VCPU_STAT(x) offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU #define VCPU_STAT(x, ...) offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU, ## __VA_ARGS__
/* #define EXIT_DEBUG */ /* #define EXIT_DEBUG */
@ -69,8 +69,8 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "pthru_all", VCPU_STAT(pthru_all) }, { "pthru_all", VCPU_STAT(pthru_all) },
{ "pthru_host", VCPU_STAT(pthru_host) }, { "pthru_host", VCPU_STAT(pthru_host) },
{ "pthru_bad_aff", VCPU_STAT(pthru_bad_aff) }, { "pthru_bad_aff", VCPU_STAT(pthru_bad_aff) },
{ "largepages_2M", VM_STAT(num_2M_pages) }, { "largepages_2M", VM_STAT(num_2M_pages, .mode = 0444) },
{ "largepages_1G", VM_STAT(num_1G_pages) }, { "largepages_1G", VM_STAT(num_1G_pages, .mode = 0444) },
{ NULL } { NULL }
}; };
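The VM_STAT()/VCPU_STAT() change above works because any extra arguments ride
along into the surrounding struct initializer, letting individual entries such
as largepages_2M request a read-only debugfs mode. A stand-alone sketch of the
expansion pattern, with hypothetical struct and field names (only the variadic
macro trick is the point; the comma-swallowing "## __VA_ARGS__" is the GNU C
extension the kernel already relies on):

	#include <stddef.h>

	/* Hypothetical stand-ins for the real kvm stats structures. */
	struct vm_stats {
		unsigned long num_2M_pages;
		unsigned long num_1G_pages;
	};

	struct debugfs_item {
		const char *name;
		size_t offset;
		unsigned short mode;	/* 0 here means "use the default mode" */
	};

	/* Trailing arguments such as ".mode = 0444" pass straight through. */
	#define VM_STAT(x, ...)  offsetof(struct vm_stats, x), ## __VA_ARGS__

	static struct debugfs_item entries[] = {
		{ "largepages_2M", VM_STAT(num_2M_pages, .mode = 0444) },
		{ "largepages_1G", VM_STAT(num_1G_pages) },	/* default mode */
	};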


@ -22,6 +22,7 @@
#define REG_L __REG_SEL(ld, lw) #define REG_L __REG_SEL(ld, lw)
#define REG_S __REG_SEL(sd, sw) #define REG_S __REG_SEL(sd, sw)
#define REG_SC __REG_SEL(sc.d, sc.w)
#define SZREG __REG_SEL(8, 4) #define SZREG __REG_SEL(8, 4)
#define LGREG __REG_SEL(3, 2) #define LGREG __REG_SEL(3, 2)


@ -98,7 +98,26 @@ _save_context:
*/ */
.macro RESTORE_ALL .macro RESTORE_ALL
REG_L a0, PT_SSTATUS(sp) REG_L a0, PT_SSTATUS(sp)
REG_L a2, PT_SEPC(sp) /*
* The current load reservation is effectively part of the processor's
* state, in the sense that load reservations cannot be shared between
* different hart contexts. We can't actually save and restore a load
* reservation, so instead here we clear any existing reservation --
* it's always legal for implementations to clear load reservations at
* any point (as long as the forward progress guarantee is kept, but
* we'll ignore that here).
*
* Dangling load reservations can be the result of taking a trap in the
* middle of an LR/SC sequence, but can also be the result of a taken
* forward branch around an SC -- which is how we implement CAS. As a
* result we need to clear reservations between the last CAS and the
* jump back to the new context. While it is unlikely the store
* completes, implementations are allowed to expand reservations to be
* arbitrarily large.
*/
REG_L a2, PT_SEPC(sp)
REG_SC x0, a2, PT_SEPC(sp)
csrw CSR_SSTATUS, a0 csrw CSR_SSTATUS, a0
csrw CSR_SEPC, a2 csrw CSR_SEPC, a2
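To make the "taken forward branch around an SC" remark in the new comment
concrete, here is a hedged, stand-alone sketch of an LR/SC-based
compare-and-swap, modelled on the usual RISC-V cmpxchg pattern rather than
copied from this tree: when the bne is taken, the sc.w never executes, so the
reservation acquired by lr.w is left dangling until something else clears it,
which is exactly what the dummy REG_SC store to x0 in RESTORE_ALL now does
before returning to a new context.

	static inline unsigned int cas_u32(unsigned int *ptr,
					   unsigned int expected,
					   unsigned int desired)
	{
		unsigned int prev, rc;

		__asm__ __volatile__(
			"0:	lr.w	%[p], %[mem]\n"		/* take a reservation  */
			"	bne	%[p], %[exp], 1f\n"	/* mismatch: skip sc.w */
			"	sc.w	%[rc], %[des], %[mem]\n"
			"	bnez	%[rc], 0b\n"
			"1:\n"
			: [p] "=&r" (prev), [rc] "=&r" (rc), [mem] "+A" (*ptr)
			: [exp] "r" (expected), [des] "r" (desired)
			: "memory");

		return prev;	/* equals "expected" iff the store went through */
	}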


@ -11,6 +11,7 @@
#include <linux/swap.h> #include <linux/swap.h>
#include <linux/sizes.h> #include <linux/sizes.h>
#include <linux/of_fdt.h> #include <linux/of_fdt.h>
#include <linux/libfdt.h>
#include <asm/fixmap.h> #include <asm/fixmap.h>
#include <asm/tlbflush.h> #include <asm/tlbflush.h>
@ -82,6 +83,8 @@ disable:
} }
#endif /* CONFIG_BLK_DEV_INITRD */ #endif /* CONFIG_BLK_DEV_INITRD */
static phys_addr_t dtb_early_pa __initdata;
void __init setup_bootmem(void) void __init setup_bootmem(void)
{ {
struct memblock_region *reg; struct memblock_region *reg;
@ -117,7 +120,12 @@ void __init setup_bootmem(void)
setup_initrd(); setup_initrd();
#endif /* CONFIG_BLK_DEV_INITRD */ #endif /* CONFIG_BLK_DEV_INITRD */
early_init_fdt_reserve_self(); /*
* Avoid using early_init_fdt_reserve_self() since __pa() does
* not work for DTB pointers that are fixmap addresses
*/
memblock_reserve(dtb_early_pa, fdt_totalsize(dtb_early_va));
early_init_fdt_scan_reserved_mem(); early_init_fdt_scan_reserved_mem();
memblock_allow_resize(); memblock_allow_resize();
memblock_dump_all(); memblock_dump_all();
@ -393,6 +401,8 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
/* Save pointer to DTB for early FDT parsing */ /* Save pointer to DTB for early FDT parsing */
dtb_early_va = (void *)fix_to_virt(FIX_FDT) + (dtb_pa & ~PAGE_MASK); dtb_early_va = (void *)fix_to_virt(FIX_FDT) + (dtb_pa & ~PAGE_MASK);
/* Save physical address for memblock reservation */
dtb_early_pa = dtb_pa;
} }
static void __init setup_vm_final(void) static void __init setup_vm_final(void)

View File

@ -44,6 +44,7 @@ CONFIG_NR_CPUS=512
CONFIG_NUMA=y CONFIG_NUMA=y
CONFIG_HZ_100=y CONFIG_HZ_100=y
CONFIG_KEXEC_FILE=y CONFIG_KEXEC_FILE=y
CONFIG_KEXEC_SIG=y
CONFIG_EXPOLINE=y CONFIG_EXPOLINE=y
CONFIG_EXPOLINE_AUTO=y CONFIG_EXPOLINE_AUTO=y
CONFIG_CHSC_SCH=y CONFIG_CHSC_SCH=y
@ -69,12 +70,13 @@ CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_MODVERSIONS=y CONFIG_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y CONFIG_MODULE_SRCVERSION_ALL=y
CONFIG_MODULE_SIG=y
CONFIG_MODULE_SIG_SHA256=y CONFIG_MODULE_SIG_SHA256=y
CONFIG_UNUSED_SYMBOLS=y
CONFIG_BLK_DEV_INTEGRITY=y CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_THROTTLING=y CONFIG_BLK_DEV_THROTTLING=y
CONFIG_BLK_WBT=y CONFIG_BLK_WBT=y
CONFIG_BLK_CGROUP_IOLATENCY=y CONFIG_BLK_CGROUP_IOLATENCY=y
CONFIG_BLK_CGROUP_IOCOST=y
CONFIG_PARTITION_ADVANCED=y CONFIG_PARTITION_ADVANCED=y
CONFIG_IBM_PARTITION=y CONFIG_IBM_PARTITION=y
CONFIG_BSD_DISKLABEL=y CONFIG_BSD_DISKLABEL=y
@ -370,6 +372,7 @@ CONFIG_NETLINK_DIAG=m
CONFIG_CGROUP_NET_PRIO=y CONFIG_CGROUP_NET_PRIO=y
CONFIG_BPF_JIT=y CONFIG_BPF_JIT=y
CONFIG_NET_PKTGEN=m CONFIG_NET_PKTGEN=m
# CONFIG_NET_DROP_MONITOR is not set
CONFIG_PCI=y CONFIG_PCI=y
CONFIG_PCI_DEBUG=y CONFIG_PCI_DEBUG=y
CONFIG_HOTPLUG_PCI=y CONFIG_HOTPLUG_PCI=y
@ -424,6 +427,7 @@ CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m CONFIG_DM_SNAPSHOT=m
CONFIG_DM_THIN_PROVISIONING=m CONFIG_DM_THIN_PROVISIONING=m
CONFIG_DM_WRITECACHE=m CONFIG_DM_WRITECACHE=m
CONFIG_DM_CLONE=m
CONFIG_DM_MIRROR=m CONFIG_DM_MIRROR=m
CONFIG_DM_LOG_USERSPACE=m CONFIG_DM_LOG_USERSPACE=m
CONFIG_DM_RAID=m CONFIG_DM_RAID=m
@ -435,6 +439,7 @@ CONFIG_DM_DELAY=m
CONFIG_DM_UEVENT=y CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=m CONFIG_DM_FLAKEY=m
CONFIG_DM_VERITY=m CONFIG_DM_VERITY=m
CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y
CONFIG_DM_SWITCH=m CONFIG_DM_SWITCH=m
CONFIG_NETDEVICES=y CONFIG_NETDEVICES=y
CONFIG_BONDING=m CONFIG_BONDING=m
@ -489,6 +494,7 @@ CONFIG_MLX5_CORE_EN=y
# CONFIG_NET_VENDOR_NVIDIA is not set # CONFIG_NET_VENDOR_NVIDIA is not set
# CONFIG_NET_VENDOR_OKI is not set # CONFIG_NET_VENDOR_OKI is not set
# CONFIG_NET_VENDOR_PACKET_ENGINES is not set # CONFIG_NET_VENDOR_PACKET_ENGINES is not set
# CONFIG_NET_VENDOR_PENSANDO is not set
# CONFIG_NET_VENDOR_QLOGIC is not set # CONFIG_NET_VENDOR_QLOGIC is not set
# CONFIG_NET_VENDOR_QUALCOMM is not set # CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RDC is not set # CONFIG_NET_VENDOR_RDC is not set
@ -538,15 +544,16 @@ CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_NOWAYOUT=y CONFIG_WATCHDOG_NOWAYOUT=y
CONFIG_SOFT_WATCHDOG=m CONFIG_SOFT_WATCHDOG=m
CONFIG_DIAG288_WATCHDOG=m CONFIG_DIAG288_WATCHDOG=m
CONFIG_DRM=y CONFIG_FB=y
CONFIG_DRM_VIRTIO_GPU=y
CONFIG_FRAMEBUFFER_CONSOLE=y CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_HID is not set # CONFIG_HID is not set
# CONFIG_USB_SUPPORT is not set # CONFIG_USB_SUPPORT is not set
CONFIG_INFINIBAND=m CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_USER_ACCESS=m CONFIG_INFINIBAND_USER_ACCESS=m
CONFIG_MLX4_INFINIBAND=m CONFIG_MLX4_INFINIBAND=m
CONFIG_MLX5_INFINIBAND=m CONFIG_MLX5_INFINIBAND=m
CONFIG_SYNC_FILE=y
CONFIG_VFIO=m CONFIG_VFIO=m
CONFIG_VFIO_PCI=m CONFIG_VFIO_PCI=m
CONFIG_VFIO_MDEV=m CONFIG_VFIO_MDEV=m
@ -580,6 +587,8 @@ CONFIG_NILFS2_FS=m
CONFIG_FS_DAX=y CONFIG_FS_DAX=y
CONFIG_EXPORTFS_BLOCK_OPS=y CONFIG_EXPORTFS_BLOCK_OPS=y
CONFIG_FS_ENCRYPTION=y CONFIG_FS_ENCRYPTION=y
CONFIG_FS_VERITY=y
CONFIG_FS_VERITY_BUILTIN_SIGNATURES=y
CONFIG_FANOTIFY=y CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
CONFIG_QUOTA_NETLINK_INTERFACE=y CONFIG_QUOTA_NETLINK_INTERFACE=y
@ -589,6 +598,7 @@ CONFIG_QFMT_V2=m
CONFIG_AUTOFS4_FS=m CONFIG_AUTOFS4_FS=m
CONFIG_FUSE_FS=y CONFIG_FUSE_FS=y
CONFIG_CUSE=m CONFIG_CUSE=m
CONFIG_VIRTIO_FS=m
CONFIG_OVERLAY_FS=m CONFIG_OVERLAY_FS=m
CONFIG_FSCACHE=m CONFIG_FSCACHE=m
CONFIG_CACHEFILES=m CONFIG_CACHEFILES=m
@ -648,12 +658,15 @@ CONFIG_FORTIFY_SOURCE=y
CONFIG_SECURITY_SELINUX=y CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_DISABLE=y CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_LOCKDOWN_LSM=y
CONFIG_SECURITY_LOCKDOWN_LSM_EARLY=y
CONFIG_INTEGRITY_SIGNATURE=y CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_IMA=y CONFIG_IMA=y
CONFIG_IMA_DEFAULT_HASH_SHA256=y CONFIG_IMA_DEFAULT_HASH_SHA256=y
CONFIG_IMA_WRITE_POLICY=y CONFIG_IMA_WRITE_POLICY=y
CONFIG_IMA_APPRAISE=y CONFIG_IMA_APPRAISE=y
CONFIG_LSM="yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor"
CONFIG_CRYPTO_USER=m CONFIG_CRYPTO_USER=m
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set # CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_CRYPTO_PCRYPT=m CONFIG_CRYPTO_PCRYPT=m
@ -664,10 +677,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_ECRDSA=m CONFIG_CRYPTO_ECRDSA=m
CONFIG_CRYPTO_CHACHA20POLY1305=m CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m CONFIG_CRYPTO_AEGIS128=m
CONFIG_CRYPTO_AEGIS128L=m
CONFIG_CRYPTO_AEGIS256=m
CONFIG_CRYPTO_MORUS640=m
CONFIG_CRYPTO_MORUS1280=m
CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_CFB=m
CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_LRW=m
CONFIG_CRYPTO_PCBC=m CONFIG_CRYPTO_PCBC=m
@ -739,7 +748,6 @@ CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_DWARF4=y CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_GDB_SCRIPTS=y CONFIG_GDB_SCRIPTS=y
CONFIG_FRAME_WARN=1024 CONFIG_FRAME_WARN=1024
CONFIG_UNUSED_SYMBOLS=y
CONFIG_HEADERS_INSTALL=y CONFIG_HEADERS_INSTALL=y
CONFIG_HEADERS_CHECK=y CONFIG_HEADERS_CHECK=y
CONFIG_DEBUG_SECTION_MISMATCH=y CONFIG_DEBUG_SECTION_MISMATCH=y

View File

@ -44,6 +44,7 @@ CONFIG_NUMA=y
# CONFIG_NUMA_EMU is not set # CONFIG_NUMA_EMU is not set
CONFIG_HZ_100=y CONFIG_HZ_100=y
CONFIG_KEXEC_FILE=y CONFIG_KEXEC_FILE=y
CONFIG_KEXEC_SIG=y
CONFIG_EXPOLINE=y CONFIG_EXPOLINE=y
CONFIG_EXPOLINE_AUTO=y CONFIG_EXPOLINE_AUTO=y
CONFIG_CHSC_SCH=y CONFIG_CHSC_SCH=y
@ -66,11 +67,12 @@ CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_MODVERSIONS=y CONFIG_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y CONFIG_MODULE_SRCVERSION_ALL=y
CONFIG_MODULE_SIG=y
CONFIG_MODULE_SIG_SHA256=y CONFIG_MODULE_SIG_SHA256=y
CONFIG_UNUSED_SYMBOLS=y
CONFIG_BLK_DEV_THROTTLING=y CONFIG_BLK_DEV_THROTTLING=y
CONFIG_BLK_WBT=y CONFIG_BLK_WBT=y
CONFIG_BLK_CGROUP_IOLATENCY=y CONFIG_BLK_CGROUP_IOLATENCY=y
CONFIG_BLK_CGROUP_IOCOST=y
CONFIG_PARTITION_ADVANCED=y CONFIG_PARTITION_ADVANCED=y
CONFIG_IBM_PARTITION=y CONFIG_IBM_PARTITION=y
CONFIG_BSD_DISKLABEL=y CONFIG_BSD_DISKLABEL=y
@ -363,6 +365,7 @@ CONFIG_NETLINK_DIAG=m
CONFIG_CGROUP_NET_PRIO=y CONFIG_CGROUP_NET_PRIO=y
CONFIG_BPF_JIT=y CONFIG_BPF_JIT=y
CONFIG_NET_PKTGEN=m CONFIG_NET_PKTGEN=m
# CONFIG_NET_DROP_MONITOR is not set
CONFIG_PCI=y CONFIG_PCI=y
CONFIG_HOTPLUG_PCI=y CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_S390=y CONFIG_HOTPLUG_PCI_S390=y
@ -418,6 +421,7 @@ CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m CONFIG_DM_SNAPSHOT=m
CONFIG_DM_THIN_PROVISIONING=m CONFIG_DM_THIN_PROVISIONING=m
CONFIG_DM_WRITECACHE=m CONFIG_DM_WRITECACHE=m
CONFIG_DM_CLONE=m
CONFIG_DM_MIRROR=m CONFIG_DM_MIRROR=m
CONFIG_DM_LOG_USERSPACE=m CONFIG_DM_LOG_USERSPACE=m
CONFIG_DM_RAID=m CONFIG_DM_RAID=m
@ -429,6 +433,7 @@ CONFIG_DM_DELAY=m
CONFIG_DM_UEVENT=y CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=m CONFIG_DM_FLAKEY=m
CONFIG_DM_VERITY=m CONFIG_DM_VERITY=m
CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y
CONFIG_DM_SWITCH=m CONFIG_DM_SWITCH=m
CONFIG_DM_INTEGRITY=m CONFIG_DM_INTEGRITY=m
CONFIG_NETDEVICES=y CONFIG_NETDEVICES=y
@ -484,6 +489,7 @@ CONFIG_MLX5_CORE_EN=y
# CONFIG_NET_VENDOR_NVIDIA is not set # CONFIG_NET_VENDOR_NVIDIA is not set
# CONFIG_NET_VENDOR_OKI is not set # CONFIG_NET_VENDOR_OKI is not set
# CONFIG_NET_VENDOR_PACKET_ENGINES is not set # CONFIG_NET_VENDOR_PACKET_ENGINES is not set
# CONFIG_NET_VENDOR_PENSANDO is not set
# CONFIG_NET_VENDOR_QLOGIC is not set # CONFIG_NET_VENDOR_QLOGIC is not set
# CONFIG_NET_VENDOR_QUALCOMM is not set # CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RDC is not set # CONFIG_NET_VENDOR_RDC is not set
@ -533,16 +539,16 @@ CONFIG_WATCHDOG_CORE=y
CONFIG_WATCHDOG_NOWAYOUT=y CONFIG_WATCHDOG_NOWAYOUT=y
CONFIG_SOFT_WATCHDOG=m CONFIG_SOFT_WATCHDOG=m
CONFIG_DIAG288_WATCHDOG=m CONFIG_DIAG288_WATCHDOG=m
CONFIG_DRM=y CONFIG_FB=y
CONFIG_DRM_VIRTIO_GPU=y
# CONFIG_BACKLIGHT_CLASS_DEVICE is not set
CONFIG_FRAMEBUFFER_CONSOLE=y CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_HID is not set # CONFIG_HID is not set
# CONFIG_USB_SUPPORT is not set # CONFIG_USB_SUPPORT is not set
CONFIG_INFINIBAND=m CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_USER_ACCESS=m CONFIG_INFINIBAND_USER_ACCESS=m
CONFIG_MLX4_INFINIBAND=m CONFIG_MLX4_INFINIBAND=m
CONFIG_MLX5_INFINIBAND=m CONFIG_MLX5_INFINIBAND=m
CONFIG_SYNC_FILE=y
CONFIG_VFIO=m CONFIG_VFIO=m
CONFIG_VFIO_PCI=m CONFIG_VFIO_PCI=m
CONFIG_VFIO_MDEV=m CONFIG_VFIO_MDEV=m
@ -573,6 +579,8 @@ CONFIG_NILFS2_FS=m
CONFIG_FS_DAX=y CONFIG_FS_DAX=y
CONFIG_EXPORTFS_BLOCK_OPS=y CONFIG_EXPORTFS_BLOCK_OPS=y
CONFIG_FS_ENCRYPTION=y CONFIG_FS_ENCRYPTION=y
CONFIG_FS_VERITY=y
CONFIG_FS_VERITY_BUILTIN_SIGNATURES=y
CONFIG_FANOTIFY=y CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
CONFIG_QUOTA_NETLINK_INTERFACE=y CONFIG_QUOTA_NETLINK_INTERFACE=y
@ -581,6 +589,7 @@ CONFIG_QFMT_V2=m
CONFIG_AUTOFS4_FS=m CONFIG_AUTOFS4_FS=m
CONFIG_FUSE_FS=y CONFIG_FUSE_FS=y
CONFIG_CUSE=m CONFIG_CUSE=m
CONFIG_VIRTIO_FS=m
CONFIG_OVERLAY_FS=m CONFIG_OVERLAY_FS=m
CONFIG_FSCACHE=m CONFIG_FSCACHE=m
CONFIG_CACHEFILES=m CONFIG_CACHEFILES=m
@ -639,12 +648,15 @@ CONFIG_SECURITY_NETWORK=y
CONFIG_SECURITY_SELINUX=y CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_DISABLE=y CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_LOCKDOWN_LSM=y
CONFIG_SECURITY_LOCKDOWN_LSM_EARLY=y
CONFIG_INTEGRITY_SIGNATURE=y CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_IMA=y CONFIG_IMA=y
CONFIG_IMA_DEFAULT_HASH_SHA256=y CONFIG_IMA_DEFAULT_HASH_SHA256=y
CONFIG_IMA_WRITE_POLICY=y CONFIG_IMA_WRITE_POLICY=y
CONFIG_IMA_APPRAISE=y CONFIG_IMA_APPRAISE=y
CONFIG_LSM="yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor"
CONFIG_CRYPTO_FIPS=y CONFIG_CRYPTO_FIPS=y
CONFIG_CRYPTO_USER=m CONFIG_CRYPTO_USER=m
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set # CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
@ -656,10 +668,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_ECRDSA=m CONFIG_CRYPTO_ECRDSA=m
CONFIG_CRYPTO_CHACHA20POLY1305=m CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m CONFIG_CRYPTO_AEGIS128=m
CONFIG_CRYPTO_AEGIS128L=m
CONFIG_CRYPTO_AEGIS256=m
CONFIG_CRYPTO_MORUS640=m
CONFIG_CRYPTO_MORUS1280=m
CONFIG_CRYPTO_CFB=m CONFIG_CRYPTO_CFB=m
CONFIG_CRYPTO_LRW=m CONFIG_CRYPTO_LRW=m
CONFIG_CRYPTO_OFB=m CONFIG_CRYPTO_OFB=m
@ -727,7 +735,6 @@ CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_DWARF4=y CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_GDB_SCRIPTS=y CONFIG_GDB_SCRIPTS=y
CONFIG_FRAME_WARN=1024 CONFIG_FRAME_WARN=1024
CONFIG_UNUSED_SYMBOLS=y
CONFIG_DEBUG_SECTION_MISMATCH=y CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_MAGIC_SYSRQ=y CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_MEMORY_INIT=y CONFIG_DEBUG_MEMORY_INIT=y

View File

@ -61,7 +61,7 @@ CONFIG_RAW_DRIVER=y
CONFIG_CONFIGFS_FS=y CONFIG_CONFIGFS_FS=y
# CONFIG_MISC_FILESYSTEMS is not set # CONFIG_MISC_FILESYSTEMS is not set
# CONFIG_NETWORK_FILESYSTEMS is not set # CONFIG_NETWORK_FILESYSTEMS is not set
# CONFIG_DIMLIB is not set CONFIG_LSM="yama,loadpin,safesetid,integrity"
CONFIG_PRINTK_TIME=y CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_FS=y CONFIG_DEBUG_FS=y

View File

@ -41,7 +41,7 @@ __ATOMIC_OPS(__atomic64_xor, long, "laxg")
#undef __ATOMIC_OP #undef __ATOMIC_OP
#define __ATOMIC_CONST_OP(op_name, op_type, op_string, op_barrier) \ #define __ATOMIC_CONST_OP(op_name, op_type, op_string, op_barrier) \
static inline void op_name(op_type val, op_type *ptr) \ static __always_inline void op_name(op_type val, op_type *ptr) \
{ \ { \
asm volatile( \ asm volatile( \
op_string " %[ptr],%[val]\n" \ op_string " %[ptr],%[val]\n" \

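This header, and several of the s390 headers and helpers that follow in this diff (bitops, cpacf, cpu_mf, jump_label, pgtable, the KVM and PCI CLP helpers), switch plain inline functions to __always_inline. The recurring pattern is asm operands that must be compile-time constants ("i" constraints and similar), which is only guaranteed when the helper is really inlined into a caller passing a literal; with CONFIG_OPTIMIZE_INLINING the compiler may otherwise ignore the inline hint. A tiny stand-alone illustration of the attribute itself (my own toy code, using __builtin_constant_p as a stand-in for an "i" asm operand):

#include <stdio.h>

#define my_always_inline inline __attribute__((__always_inline__))

/*
 * Stand-in for the s390 helpers: the body only "works" (here: reports
 * a constant) when the argument is a compile-time constant, which is
 * only guaranteed if the call is really inlined.  In the real helpers
 * the same requirement comes from asm "i" operands, and a violation is
 * a build error rather than a wrong answer.
 */
static my_always_inline int takes_constant(unsigned long val)
{
	return __builtin_constant_p(val);
}

int main(void)
{
	printf("constant-folded: %d\n", takes_constant(42)); /* 1 at -O2 */
	return 0;
}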
View File

@ -56,7 +56,7 @@ __bitops_byte(unsigned long nr, volatile unsigned long *ptr)
return ((unsigned char *)ptr) + ((nr ^ (BITS_PER_LONG - 8)) >> 3); return ((unsigned char *)ptr) + ((nr ^ (BITS_PER_LONG - 8)) >> 3);
} }
static inline void arch_set_bit(unsigned long nr, volatile unsigned long *ptr) static __always_inline void arch_set_bit(unsigned long nr, volatile unsigned long *ptr)
{ {
unsigned long *addr = __bitops_word(nr, ptr); unsigned long *addr = __bitops_word(nr, ptr);
unsigned long mask; unsigned long mask;
@ -77,7 +77,7 @@ static inline void arch_set_bit(unsigned long nr, volatile unsigned long *ptr)
__atomic64_or(mask, (long *)addr); __atomic64_or(mask, (long *)addr);
} }
static inline void arch_clear_bit(unsigned long nr, volatile unsigned long *ptr) static __always_inline void arch_clear_bit(unsigned long nr, volatile unsigned long *ptr)
{ {
unsigned long *addr = __bitops_word(nr, ptr); unsigned long *addr = __bitops_word(nr, ptr);
unsigned long mask; unsigned long mask;
@ -98,8 +98,8 @@ static inline void arch_clear_bit(unsigned long nr, volatile unsigned long *ptr)
__atomic64_and(mask, (long *)addr); __atomic64_and(mask, (long *)addr);
} }
static inline void arch_change_bit(unsigned long nr, static __always_inline void arch_change_bit(unsigned long nr,
volatile unsigned long *ptr) volatile unsigned long *ptr)
{ {
unsigned long *addr = __bitops_word(nr, ptr); unsigned long *addr = __bitops_word(nr, ptr);
unsigned long mask; unsigned long mask;

View File

@ -171,7 +171,7 @@ typedef struct { unsigned char bytes[16]; } cpacf_mask_t;
* *
* Returns 1 if @func is available for @opcode, 0 otherwise * Returns 1 if @func is available for @opcode, 0 otherwise
*/ */
static inline void __cpacf_query(unsigned int opcode, cpacf_mask_t *mask) static __always_inline void __cpacf_query(unsigned int opcode, cpacf_mask_t *mask)
{ {
register unsigned long r0 asm("0") = 0; /* query function */ register unsigned long r0 asm("0") = 0; /* query function */
register unsigned long r1 asm("1") = (unsigned long) mask; register unsigned long r1 asm("1") = (unsigned long) mask;

View File

@ -28,6 +28,8 @@ asm(".include \"asm/cpu_mf-insn.h\"\n");
CPU_MF_INT_SF_PRA|CPU_MF_INT_SF_SACA| \ CPU_MF_INT_SF_PRA|CPU_MF_INT_SF_SACA| \
CPU_MF_INT_SF_LSDA) CPU_MF_INT_SF_LSDA)
#define CPU_MF_SF_RIBM_NOTAV 0x1 /* Sampling unavailable */
/* CPU measurement facility support */ /* CPU measurement facility support */
static inline int cpum_cf_avail(void) static inline int cpum_cf_avail(void)
{ {
@ -69,7 +71,8 @@ struct hws_qsi_info_block { /* Bit(s) */
unsigned long max_sampl_rate; /* 16-23: maximum sampling interval*/ unsigned long max_sampl_rate; /* 16-23: maximum sampling interval*/
unsigned long tear; /* 24-31: TEAR contents */ unsigned long tear; /* 24-31: TEAR contents */
unsigned long dear; /* 32-39: DEAR contents */ unsigned long dear; /* 32-39: DEAR contents */
unsigned int rsvrd0; /* 40-43: reserved */ unsigned int rsvrd0:24; /* 40-42: reserved */
unsigned int ribm:8; /* 43: Reserved by IBM */
unsigned int cpu_speed; /* 44-47: CPU speed */ unsigned int cpu_speed; /* 44-47: CPU speed */
unsigned long long rsvrd1; /* 48-55: reserved */ unsigned long long rsvrd1; /* 48-55: reserved */
unsigned long long rsvrd2; /* 56-63: reserved */ unsigned long long rsvrd2; /* 56-63: reserved */
@ -220,7 +223,8 @@ enum stcctm_ctr_set {
MT_DIAG = 5, MT_DIAG = 5,
MT_DIAG_CLEARING = 9, /* clears loss-of-MT-ctr-data alert */ MT_DIAG_CLEARING = 9, /* clears loss-of-MT-ctr-data alert */
}; };
static inline int stcctm(enum stcctm_ctr_set set, u64 range, u64 *dest)
static __always_inline int stcctm(enum stcctm_ctr_set set, u64 range, u64 *dest)
{ {
int cc; int cc;

View File

@ -12,8 +12,6 @@
#include <asm/page.h> #include <asm/page.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#define is_hugepage_only_range(mm, addr, len) 0
#define hugetlb_free_pgd_range free_pgd_range #define hugetlb_free_pgd_range free_pgd_range
#define hugepages_supported() (MACHINE_HAS_EDAT1) #define hugepages_supported() (MACHINE_HAS_EDAT1)
@ -23,6 +21,13 @@ pte_t huge_ptep_get(pte_t *ptep);
pte_t huge_ptep_get_and_clear(struct mm_struct *mm, pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep); unsigned long addr, pte_t *ptep);
static inline bool is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr,
unsigned long len)
{
return false;
}
/* /*
* If the arch doesn't supply something else, assume that hugepage * If the arch doesn't supply something else, assume that hugepage
* size aligned regions are ok without further preparation. * size aligned regions are ok without further preparation.

View File

@ -20,7 +20,7 @@
* We use a brcl 0,2 instruction for jump labels at compile time so it * We use a brcl 0,2 instruction for jump labels at compile time so it
* can be easily distinguished from a hotpatch generated instruction. * can be easily distinguished from a hotpatch generated instruction.
*/ */
static inline bool arch_static_branch(struct static_key *key, bool branch) static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
{ {
asm_volatile_goto("0: brcl 0,"__stringify(JUMP_LABEL_NOP_OFFSET)"\n" asm_volatile_goto("0: brcl 0,"__stringify(JUMP_LABEL_NOP_OFFSET)"\n"
".pushsection __jump_table,\"aw\"\n" ".pushsection __jump_table,\"aw\"\n"
@ -34,7 +34,7 @@ label:
return true; return true;
} }
static inline bool arch_static_branch_jump(struct static_key *key, bool branch) static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
{ {
asm_volatile_goto("0: brcl 15,%l[label]\n" asm_volatile_goto("0: brcl 15,%l[label]\n"
".pushsection __jump_table,\"aw\"\n" ".pushsection __jump_table,\"aw\"\n"

View File

@ -997,9 +997,9 @@ static inline pte_t pte_mkhuge(pte_t pte)
#define IPTE_NODAT 0x400 #define IPTE_NODAT 0x400
#define IPTE_GUEST_ASCE 0x800 #define IPTE_GUEST_ASCE 0x800
static inline void __ptep_ipte(unsigned long address, pte_t *ptep, static __always_inline void __ptep_ipte(unsigned long address, pte_t *ptep,
unsigned long opt, unsigned long asce, unsigned long opt, unsigned long asce,
int local) int local)
{ {
unsigned long pto = (unsigned long) ptep; unsigned long pto = (unsigned long) ptep;
@ -1020,8 +1020,8 @@ static inline void __ptep_ipte(unsigned long address, pte_t *ptep,
: [r1] "a" (pto), [m4] "i" (local) : "memory"); : [r1] "a" (pto), [m4] "i" (local) : "memory");
} }
static inline void __ptep_ipte_range(unsigned long address, int nr, static __always_inline void __ptep_ipte_range(unsigned long address, int nr,
pte_t *ptep, int local) pte_t *ptep, int local)
{ {
unsigned long pto = (unsigned long) ptep; unsigned long pto = (unsigned long) ptep;
@ -1269,7 +1269,8 @@ static inline pte_t *pte_offset(pmd_t *pmd, unsigned long address)
#define pte_offset_kernel(pmd, address) pte_offset(pmd, address) #define pte_offset_kernel(pmd, address) pte_offset(pmd, address)
#define pte_offset_map(pmd, address) pte_offset_kernel(pmd, address) #define pte_offset_map(pmd, address) pte_offset_kernel(pmd, address)
#define pte_unmap(pte) do { } while (0)
static inline void pte_unmap(pte_t *pte) { }
static inline bool gup_fast_permitted(unsigned long start, unsigned long end) static inline bool gup_fast_permitted(unsigned long start, unsigned long end)
{ {
@ -1435,9 +1436,9 @@ static inline void __pmdp_csp(pmd_t *pmdp)
#define IDTE_NODAT 0x1000 #define IDTE_NODAT 0x1000
#define IDTE_GUEST_ASCE 0x2000 #define IDTE_GUEST_ASCE 0x2000
static inline void __pmdp_idte(unsigned long addr, pmd_t *pmdp, static __always_inline void __pmdp_idte(unsigned long addr, pmd_t *pmdp,
unsigned long opt, unsigned long asce, unsigned long opt, unsigned long asce,
int local) int local)
{ {
unsigned long sto; unsigned long sto;
@ -1461,9 +1462,9 @@ static inline void __pmdp_idte(unsigned long addr, pmd_t *pmdp,
} }
} }
static inline void __pudp_idte(unsigned long addr, pud_t *pudp, static __always_inline void __pudp_idte(unsigned long addr, pud_t *pudp,
unsigned long opt, unsigned long asce, unsigned long opt, unsigned long asce,
int local) int local)
{ {
unsigned long r3o; unsigned long r3o;

View File

@ -111,7 +111,7 @@ struct qib {
/* private: */ /* private: */
u8 res[88]; u8 res[88];
/* public: */ /* public: */
u8 parm[QDIO_MAX_BUFFERS_PER_Q]; u8 parm[128];
} __attribute__ ((packed, aligned(256))); } __attribute__ ((packed, aligned(256)));
/** /**

View File

@ -390,7 +390,7 @@ static size_t cf_diag_getctrset(struct cf_ctrset_entry *ctrdata, int ctrset,
debug_sprintf_event(cf_diag_dbg, 6, debug_sprintf_event(cf_diag_dbg, 6,
"%s ctrset %d ctrset_size %zu cfvn %d csvn %d" "%s ctrset %d ctrset_size %zu cfvn %d csvn %d"
" need %zd rc:%d\n", " need %zd rc %d\n",
__func__, ctrset, ctrset_size, cpuhw->info.cfvn, __func__, ctrset, ctrset_size, cpuhw->info.cfvn,
cpuhw->info.csvn, need, rc); cpuhw->info.csvn, need, rc);
return need; return need;
@ -567,7 +567,7 @@ static int cf_diag_add(struct perf_event *event, int flags)
int err = 0; int err = 0;
debug_sprintf_event(cf_diag_dbg, 5, debug_sprintf_event(cf_diag_dbg, 5,
"%s event %p cpu %d flags %#x cpuhw:%p\n", "%s event %p cpu %d flags %#x cpuhw %p\n",
__func__, event, event->cpu, flags, cpuhw); __func__, event, event->cpu, flags, cpuhw);
if (cpuhw->flags & PMU_F_IN_USE) { if (cpuhw->flags & PMU_F_IN_USE) {

View File

@ -803,6 +803,12 @@ static int __hw_perf_event_init(struct perf_event *event)
goto out; goto out;
} }
if (si.ribm & CPU_MF_SF_RIBM_NOTAV) {
pr_warn("CPU Measurement Facility sampling is temporarily not available\n");
err = -EBUSY;
goto out;
}
/* Always enable basic sampling */ /* Always enable basic sampling */
SAMPL_FLAGS(hwc) = PERF_CPUM_SF_BASIC_MODE; SAMPL_FLAGS(hwc) = PERF_CPUM_SF_BASIC_MODE;
@ -895,7 +901,7 @@ static int cpumsf_pmu_event_init(struct perf_event *event)
/* Check online status of the CPU to which the event is pinned */ /* Check online status of the CPU to which the event is pinned */
if (event->cpu >= 0 && !cpu_online(event->cpu)) if (event->cpu >= 0 && !cpu_online(event->cpu))
return -ENODEV; return -ENODEV;
/* Force reset of idle/hv excludes regardless of what the /* Force reset of idle/hv excludes regardless of what the
* user requested. * user requested.

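Tying the two s390 sampling hunks together: cpu_mf.h above splits the former 32-bit reserved word of the QSI block into a 24-bit reserved field plus an 8-bit "reserved by IBM" (ribm) field and defines CPU_MF_SF_RIBM_NOTAV, and __hw_perf_event_init() now refuses to create sampling events while that bit is set. A stand-alone sketch of just that field split and check (the layout here is illustrative only; the real struct depends on s390's big-endian bitfield ordering):

#include <stdio.h>

#define CPU_MF_SF_RIBM_NOTAV	0x1	/* sampling temporarily unavailable */

struct qsi_like {
	unsigned int rsvrd0:24;		/* bytes 40-42: reserved        */
	unsigned int ribm:8;		/* byte  43:    reserved by IBM */
};

int main(void)
{
	struct qsi_like si = { .ribm = CPU_MF_SF_RIBM_NOTAV };

	if (si.ribm & CPU_MF_SF_RIBM_NOTAV)
		puts("CPU Measurement Facility sampling is temporarily not available");
	return 0;
}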
View File

@ -332,7 +332,7 @@ static inline int plo_test_bit(unsigned char nr)
return cc == 0; return cc == 0;
} }
static inline void __insn32_query(unsigned int opcode, u8 query[32]) static __always_inline void __insn32_query(unsigned int opcode, u8 *query)
{ {
register unsigned long r0 asm("0") = 0; /* query function */ register unsigned long r0 asm("0") = 0; /* query function */
register unsigned long r1 asm("1") = (unsigned long) query; register unsigned long r1 asm("1") = (unsigned long) query;
@ -340,9 +340,9 @@ static inline void __insn32_query(unsigned int opcode, u8 query[32])
asm volatile( asm volatile(
/* Parameter regs are ignored */ /* Parameter regs are ignored */
" .insn rrf,%[opc] << 16,2,4,6,0\n" " .insn rrf,%[opc] << 16,2,4,6,0\n"
: "=m" (*query) :
: "d" (r0), "a" (r1), [opc] "i" (opcode) : "d" (r0), "a" (r1), [opc] "i" (opcode)
: "cc"); : "cc", "memory");
} }
#define INSN_SORTL 0xb938 #define INSN_SORTL 0xb938

View File

@ -66,7 +66,7 @@ static inline int clp_get_ilp(unsigned long *ilp)
/* /*
* Call Logical Processor with c=0, the give constant lps and an lpcb request. * Call Logical Processor with c=0, the give constant lps and an lpcb request.
*/ */
static inline int clp_req(void *data, unsigned int lps) static __always_inline int clp_req(void *data, unsigned int lps)
{ {
struct { u8 _[CLP_BLK_SIZE]; } *req = data; struct { u8 _[CLP_BLK_SIZE]; } *req = data;
u64 ignored; u64 ignored;

View File

@ -219,13 +219,6 @@ enum {
PFERR_WRITE_MASK | \ PFERR_WRITE_MASK | \
PFERR_PRESENT_MASK) PFERR_PRESENT_MASK)
/*
* The mask used to denote special SPTEs, which can be either MMIO SPTEs or
* Access Tracking SPTEs. We use bit 62 instead of bit 63 to avoid conflicting
* with the SVE bit in EPT PTEs.
*/
#define SPTE_SPECIAL_MASK (1ULL << 62)
/* apic attention bits */ /* apic attention bits */
#define KVM_APIC_CHECK_VAPIC 0 #define KVM_APIC_CHECK_VAPIC 0
/* /*

View File

@ -485,6 +485,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
/* cpuid 0x80000008.ebx */ /* cpuid 0x80000008.ebx */
const u32 kvm_cpuid_8000_0008_ebx_x86_features = const u32 kvm_cpuid_8000_0008_ebx_x86_features =
F(CLZERO) | F(XSAVEERPTR) |
F(WBNOINVD) | F(AMD_IBPB) | F(AMD_IBRS) | F(AMD_SSBD) | F(VIRT_SSBD) | F(WBNOINVD) | F(AMD_IBPB) | F(AMD_IBRS) | F(AMD_SSBD) | F(VIRT_SSBD) |
F(AMD_SSB_NO) | F(AMD_STIBP) | F(AMD_STIBP_ALWAYS_ON); F(AMD_SSB_NO) | F(AMD_STIBP) | F(AMD_STIBP_ALWAYS_ON);
@ -618,16 +619,20 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
*/ */
case 0x1f: case 0x1f:
case 0xb: { case 0xb: {
int i, level_type; int i;
/* read more entries until level_type is zero */ /*
for (i = 1; ; ++i) { * We filled in entry[0] for CPUID(EAX=<function>,
* ECX=00H) above. If its level type (ECX[15:8]) is
* zero, then the leaf is unimplemented, and we're
* done. Otherwise, continue to populate entries
* until the level type (ECX[15:8]) of the previously
* added entry is zero.
*/
for (i = 1; entry[i - 1].ecx & 0xff00; ++i) {
if (*nent >= maxnent) if (*nent >= maxnent)
goto out; goto out;
level_type = entry[i - 1].ecx & 0xff00;
if (!level_type)
break;
do_host_cpuid(&entry[i], function, i); do_host_cpuid(&entry[i], function, i);
++*nent; ++*nent;
} }
@ -969,53 +974,66 @@ struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
EXPORT_SYMBOL_GPL(kvm_find_cpuid_entry); EXPORT_SYMBOL_GPL(kvm_find_cpuid_entry);
/* /*
* If no match is found, check whether we exceed the vCPU's limit * If the basic or extended CPUID leaf requested is higher than the
* and return the content of the highest valid _standard_ leaf instead. * maximum supported basic or extended leaf, respectively, then it is
* This is to satisfy the CPUID specification. * out of range.
*/ */
static struct kvm_cpuid_entry2* check_cpuid_limit(struct kvm_vcpu *vcpu, static bool cpuid_function_in_range(struct kvm_vcpu *vcpu, u32 function)
u32 function, u32 index)
{ {
struct kvm_cpuid_entry2 *maxlevel; struct kvm_cpuid_entry2 *max;
maxlevel = kvm_find_cpuid_entry(vcpu, function & 0x80000000, 0); max = kvm_find_cpuid_entry(vcpu, function & 0x80000000, 0);
if (!maxlevel || maxlevel->eax >= function) return max && function <= max->eax;
return NULL;
if (function & 0x80000000) {
maxlevel = kvm_find_cpuid_entry(vcpu, 0, 0);
if (!maxlevel)
return NULL;
}
return kvm_find_cpuid_entry(vcpu, maxlevel->eax, index);
} }
bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx, bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
u32 *ecx, u32 *edx, bool check_limit) u32 *ecx, u32 *edx, bool check_limit)
{ {
u32 function = *eax, index = *ecx; u32 function = *eax, index = *ecx;
struct kvm_cpuid_entry2 *best; struct kvm_cpuid_entry2 *entry;
bool entry_found = true; struct kvm_cpuid_entry2 *max;
bool found;
best = kvm_find_cpuid_entry(vcpu, function, index); entry = kvm_find_cpuid_entry(vcpu, function, index);
found = entry;
if (!best) { /*
entry_found = false; * Intel CPUID semantics treats any query for an out-of-range
if (!check_limit) * leaf as if the highest basic leaf (i.e. CPUID.0H:EAX) were
goto out; * requested. AMD CPUID semantics returns all zeroes for any
* undefined leaf, whether or not the leaf is in range.
best = check_cpuid_limit(vcpu, function, index); */
if (!entry && check_limit && !guest_cpuid_is_amd(vcpu) &&
!cpuid_function_in_range(vcpu, function)) {
max = kvm_find_cpuid_entry(vcpu, 0, 0);
if (max) {
function = max->eax;
entry = kvm_find_cpuid_entry(vcpu, function, index);
}
} }
if (entry) {
out: *eax = entry->eax;
if (best) { *ebx = entry->ebx;
*eax = best->eax; *ecx = entry->ecx;
*ebx = best->ebx; *edx = entry->edx;
*ecx = best->ecx; } else {
*edx = best->edx;
} else
*eax = *ebx = *ecx = *edx = 0; *eax = *ebx = *ecx = *edx = 0;
trace_kvm_cpuid(function, *eax, *ebx, *ecx, *edx, entry_found); /*
return entry_found; * When leaf 0BH or 1FH is defined, CL is pass-through
* and EDX is always the x2APIC ID, even for undefined
* subleaves. Index 1 will exist iff the leaf is
* implemented, so we pass through CL iff leaf 1
* exists. EDX can be copied from any existing index.
*/
if (function == 0xb || function == 0x1f) {
entry = kvm_find_cpuid_entry(vcpu, function, 1);
if (entry) {
*ecx = index & 0xff;
*edx = entry->edx;
}
}
}
trace_kvm_cpuid(function, *eax, *ebx, *ecx, *edx, found);
return found;
} }
EXPORT_SYMBOL_GPL(kvm_cpuid); EXPORT_SYMBOL_GPL(kvm_cpuid);
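Two pieces of reasoning in the hunks above lend themselves to a quick user-space check. First, topology leaves 0BH/1FH are enumerated by reading subleaves until the previously returned entry reports a level type of zero in ECX[15:8]; second, for those leaves ECX/EDX stay meaningful even for undefined subleaves. The sketch below walks leaf 0BH with the compiler's <cpuid.h> (x86 only; this illustrates the enumeration rule, it is not KVM code):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx, i;

	if (!__get_cpuid_count(0xb, 0, &eax, &ebx, &ecx, &edx))
		return 0;			/* leaf 0BH not supported */

	for (i = 0; ; i++) {
		__get_cpuid_count(0xb, i, &eax, &ebx, &ecx, &edx);
		printf("subleaf %u: level type %u, x2APIC id %#x\n",
		       i, (ecx >> 8) & 0xff, edx);
		if (!(ecx & 0xff00))		/* level type 0: done */
			break;
	}
	return 0;
}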

View File

@ -66,9 +66,10 @@
#define X2APIC_BROADCAST 0xFFFFFFFFul #define X2APIC_BROADCAST 0xFFFFFFFFul
static bool lapic_timer_advance_dynamic __read_mostly; static bool lapic_timer_advance_dynamic __read_mostly;
#define LAPIC_TIMER_ADVANCE_ADJUST_MIN 100 #define LAPIC_TIMER_ADVANCE_ADJUST_MIN 100 /* clock cycles */
#define LAPIC_TIMER_ADVANCE_ADJUST_MAX 5000 #define LAPIC_TIMER_ADVANCE_ADJUST_MAX 10000 /* clock cycles */
#define LAPIC_TIMER_ADVANCE_ADJUST_INIT 1000 #define LAPIC_TIMER_ADVANCE_NS_INIT 1000
#define LAPIC_TIMER_ADVANCE_NS_MAX 5000
/* step-by-step approximation to mitigate fluctuation */ /* step-by-step approximation to mitigate fluctuation */
#define LAPIC_TIMER_ADVANCE_ADJUST_STEP 8 #define LAPIC_TIMER_ADVANCE_ADJUST_STEP 8
@ -1504,8 +1505,8 @@ static inline void adjust_lapic_timer_advance(struct kvm_vcpu *vcpu,
timer_advance_ns += ns/LAPIC_TIMER_ADVANCE_ADJUST_STEP; timer_advance_ns += ns/LAPIC_TIMER_ADVANCE_ADJUST_STEP;
} }
if (unlikely(timer_advance_ns > LAPIC_TIMER_ADVANCE_ADJUST_MAX)) if (unlikely(timer_advance_ns > LAPIC_TIMER_ADVANCE_NS_MAX))
timer_advance_ns = LAPIC_TIMER_ADVANCE_ADJUST_INIT; timer_advance_ns = LAPIC_TIMER_ADVANCE_NS_INIT;
apic->lapic_timer.timer_advance_ns = timer_advance_ns; apic->lapic_timer.timer_advance_ns = timer_advance_ns;
} }
@ -2302,7 +2303,7 @@ int kvm_create_lapic(struct kvm_vcpu *vcpu, int timer_advance_ns)
HRTIMER_MODE_ABS_HARD); HRTIMER_MODE_ABS_HARD);
apic->lapic_timer.timer.function = apic_timer_fn; apic->lapic_timer.timer.function = apic_timer_fn;
if (timer_advance_ns == -1) { if (timer_advance_ns == -1) {
apic->lapic_timer.timer_advance_ns = LAPIC_TIMER_ADVANCE_ADJUST_INIT; apic->lapic_timer.timer_advance_ns = LAPIC_TIMER_ADVANCE_NS_INIT;
lapic_timer_advance_dynamic = true; lapic_timer_advance_dynamic = true;
} else { } else {
apic->lapic_timer.timer_advance_ns = timer_advance_ns; apic->lapic_timer.timer_advance_ns = timer_advance_ns;
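The renamed constants above separate the per-sample adjustment bounds (in clock cycles) from the advance value itself (in nanoseconds), and cap the advance at 5000 ns, falling back to the 1000 ns initial value when the estimate runs away. A toy model of that step-and-clamp behaviour, with the constants copied from the hunk and an invented error sequence (the real code derives the error from per-vCPU TSC deltas):

#include <stdio.h>

#define LAPIC_TIMER_ADVANCE_ADJUST_STEP	8
#define LAPIC_TIMER_ADVANCE_NS_INIT	1000
#define LAPIC_TIMER_ADVANCE_NS_MAX	5000

/* Simplified: nudge the advance by 1/8 of the measured error and fall
 * back to the initial value if the result grows implausibly large. */
static unsigned int adjust(unsigned int timer_advance_ns, long error_ns)
{
	if (error_ns < 0)
		timer_advance_ns -= -error_ns / LAPIC_TIMER_ADVANCE_ADJUST_STEP;
	else
		timer_advance_ns += error_ns / LAPIC_TIMER_ADVANCE_ADJUST_STEP;

	if (timer_advance_ns > LAPIC_TIMER_ADVANCE_NS_MAX)
		timer_advance_ns = LAPIC_TIMER_ADVANCE_NS_INIT;
	return timer_advance_ns;
}

int main(void)
{
	unsigned int i, adv = LAPIC_TIMER_ADVANCE_NS_INIT;
	long samples[] = { 4000, 4000, 4000, -2000, 100000 };

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		adv = adjust(adv, samples[i]);
		printf("error %+ld ns -> advance %u ns\n", samples[i], adv);
	}
	return 0;
}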

View File

@ -83,7 +83,17 @@ module_param(dbg, bool, 0644);
#define PTE_PREFETCH_NUM 8 #define PTE_PREFETCH_NUM 8
#define PT_FIRST_AVAIL_BITS_SHIFT 10 #define PT_FIRST_AVAIL_BITS_SHIFT 10
#define PT64_SECOND_AVAIL_BITS_SHIFT 52 #define PT64_SECOND_AVAIL_BITS_SHIFT 54
/*
* The mask used to denote special SPTEs, which can be either MMIO SPTEs or
* Access Tracking SPTEs.
*/
#define SPTE_SPECIAL_MASK (3ULL << 52)
#define SPTE_AD_ENABLED_MASK (0ULL << 52)
#define SPTE_AD_DISABLED_MASK (1ULL << 52)
#define SPTE_AD_WRPROT_ONLY_MASK (2ULL << 52)
#define SPTE_MMIO_MASK (3ULL << 52)
#define PT64_LEVEL_BITS 9 #define PT64_LEVEL_BITS 9
@ -219,12 +229,11 @@ static u64 __read_mostly shadow_present_mask;
static u64 __read_mostly shadow_me_mask; static u64 __read_mostly shadow_me_mask;
/* /*
* SPTEs used by MMUs without A/D bits are marked with shadow_acc_track_value. * SPTEs used by MMUs without A/D bits are marked with SPTE_AD_DISABLED_MASK;
* Non-present SPTEs with shadow_acc_track_value set are in place for access * shadow_acc_track_mask is the set of bits to be cleared in non-accessed
* tracking. * pages.
*/ */
static u64 __read_mostly shadow_acc_track_mask; static u64 __read_mostly shadow_acc_track_mask;
static const u64 shadow_acc_track_value = SPTE_SPECIAL_MASK;
/* /*
* The mask/shift to use for saving the original R/X bits when marking the PTE * The mask/shift to use for saving the original R/X bits when marking the PTE
@ -304,7 +313,7 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value, u64 access_mask)
{ {
BUG_ON((u64)(unsigned)access_mask != access_mask); BUG_ON((u64)(unsigned)access_mask != access_mask);
BUG_ON((mmio_mask & mmio_value) != mmio_value); BUG_ON((mmio_mask & mmio_value) != mmio_value);
shadow_mmio_value = mmio_value | SPTE_SPECIAL_MASK; shadow_mmio_value = mmio_value | SPTE_MMIO_MASK;
shadow_mmio_mask = mmio_mask | SPTE_SPECIAL_MASK; shadow_mmio_mask = mmio_mask | SPTE_SPECIAL_MASK;
shadow_mmio_access_mask = access_mask; shadow_mmio_access_mask = access_mask;
} }
@ -320,10 +329,27 @@ static inline bool sp_ad_disabled(struct kvm_mmu_page *sp)
return sp->role.ad_disabled; return sp->role.ad_disabled;
} }
static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
{
/*
* When using the EPT page-modification log, the GPAs in the log
* would come from L2 rather than L1. Therefore, we need to rely
* on write protection to record dirty pages. This also bypasses
* PML, since writes now result in a vmexit.
*/
return vcpu->arch.mmu == &vcpu->arch.guest_mmu;
}
static inline bool spte_ad_enabled(u64 spte) static inline bool spte_ad_enabled(u64 spte)
{ {
MMU_WARN_ON(is_mmio_spte(spte)); MMU_WARN_ON(is_mmio_spte(spte));
return !(spte & shadow_acc_track_value); return (spte & SPTE_SPECIAL_MASK) != SPTE_AD_DISABLED_MASK;
}
static inline bool spte_ad_need_write_protect(u64 spte)
{
MMU_WARN_ON(is_mmio_spte(spte));
return (spte & SPTE_SPECIAL_MASK) != SPTE_AD_ENABLED_MASK;
} }
static inline u64 spte_shadow_accessed_mask(u64 spte) static inline u64 spte_shadow_accessed_mask(u64 spte)
@ -461,7 +487,7 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
{ {
BUG_ON(!dirty_mask != !accessed_mask); BUG_ON(!dirty_mask != !accessed_mask);
BUG_ON(!accessed_mask && !acc_track_mask); BUG_ON(!accessed_mask && !acc_track_mask);
BUG_ON(acc_track_mask & shadow_acc_track_value); BUG_ON(acc_track_mask & SPTE_SPECIAL_MASK);
shadow_user_mask = user_mask; shadow_user_mask = user_mask;
shadow_accessed_mask = accessed_mask; shadow_accessed_mask = accessed_mask;
@ -1589,16 +1615,16 @@ static bool spte_clear_dirty(u64 *sptep)
rmap_printk("rmap_clear_dirty: spte %p %llx\n", sptep, *sptep); rmap_printk("rmap_clear_dirty: spte %p %llx\n", sptep, *sptep);
MMU_WARN_ON(!spte_ad_enabled(spte));
spte &= ~shadow_dirty_mask; spte &= ~shadow_dirty_mask;
return mmu_spte_update(sptep, spte); return mmu_spte_update(sptep, spte);
} }
static bool wrprot_ad_disabled_spte(u64 *sptep) static bool spte_wrprot_for_clear_dirty(u64 *sptep)
{ {
bool was_writable = test_and_clear_bit(PT_WRITABLE_SHIFT, bool was_writable = test_and_clear_bit(PT_WRITABLE_SHIFT,
(unsigned long *)sptep); (unsigned long *)sptep);
if (was_writable) if (was_writable && !spte_ad_enabled(*sptep))
kvm_set_pfn_dirty(spte_to_pfn(*sptep)); kvm_set_pfn_dirty(spte_to_pfn(*sptep));
return was_writable; return was_writable;
@ -1617,10 +1643,10 @@ static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
bool flush = false; bool flush = false;
for_each_rmap_spte(rmap_head, &iter, sptep) for_each_rmap_spte(rmap_head, &iter, sptep)
if (spte_ad_enabled(*sptep)) if (spte_ad_need_write_protect(*sptep))
flush |= spte_clear_dirty(sptep); flush |= spte_wrprot_for_clear_dirty(sptep);
else else
flush |= wrprot_ad_disabled_spte(sptep); flush |= spte_clear_dirty(sptep);
return flush; return flush;
} }
@ -1631,6 +1657,11 @@ static bool spte_set_dirty(u64 *sptep)
rmap_printk("rmap_set_dirty: spte %p %llx\n", sptep, *sptep); rmap_printk("rmap_set_dirty: spte %p %llx\n", sptep, *sptep);
/*
* Similar to the !kvm_x86_ops->slot_disable_log_dirty case,
* do not bother adding back write access to pages marked
* SPTE_AD_WRPROT_ONLY_MASK.
*/
spte |= shadow_dirty_mask; spte |= shadow_dirty_mask;
return mmu_spte_update(sptep, spte); return mmu_spte_update(sptep, spte);
@ -2622,7 +2653,7 @@ static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
shadow_user_mask | shadow_x_mask | shadow_me_mask; shadow_user_mask | shadow_x_mask | shadow_me_mask;
if (sp_ad_disabled(sp)) if (sp_ad_disabled(sp))
spte |= shadow_acc_track_value; spte |= SPTE_AD_DISABLED_MASK;
else else
spte |= shadow_accessed_mask; spte |= shadow_accessed_mask;
@ -2968,7 +2999,9 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
sp = page_header(__pa(sptep)); sp = page_header(__pa(sptep));
if (sp_ad_disabled(sp)) if (sp_ad_disabled(sp))
spte |= shadow_acc_track_value; spte |= SPTE_AD_DISABLED_MASK;
else if (kvm_vcpu_ad_need_write_protect(vcpu))
spte |= SPTE_AD_WRPROT_ONLY_MASK;
/* /*
* For the EPT case, shadow_present_mask is 0 if hardware * For the EPT case, shadow_present_mask is 0 if hardware

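The MMU hunks above replace the single special-SPTE bit with a two-bit field at bits 53:52, so a shadow PTE can be tagged as A/D-enabled, A/D-disabled, write-protect-only (the new case used when PML would otherwise log L2 GPAs for a nested guest), or MMIO. A stand-alone sketch of just the encoding and the two predicates, mirroring the definitions in the hunk (none of the surrounding KVM logic is reproduced):

#include <stdint.h>
#include <stdio.h>

#define SPTE_SPECIAL_MASK	 (3ULL << 52)
#define SPTE_AD_ENABLED_MASK	 (0ULL << 52)
#define SPTE_AD_DISABLED_MASK	 (1ULL << 52)
#define SPTE_AD_WRPROT_ONLY_MASK (2ULL << 52)
#define SPTE_MMIO_MASK		 (3ULL << 52)

/* A/D bits are considered usable unless explicitly marked disabled. */
static int spte_ad_enabled(uint64_t spte)
{
	return (spte & SPTE_SPECIAL_MASK) != SPTE_AD_DISABLED_MASK;
}

/* Dirty logging must use write protection unless A/D is fully enabled. */
static int spte_ad_need_write_protect(uint64_t spte)
{
	return (spte & SPTE_SPECIAL_MASK) != SPTE_AD_ENABLED_MASK;
}

int main(void)
{
	uint64_t spte = 0x123456000ULL | SPTE_AD_WRPROT_ONLY_MASK;

	printf("ad_enabled=%d need_write_protect=%d\n",
	       spte_ad_enabled(spte), spte_ad_need_write_protect(spte));
	return 0;
}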
View File

@ -2610,7 +2610,7 @@ static int nested_check_vm_entry_controls(struct kvm_vcpu *vcpu,
/* VM-entry exception error code */ /* VM-entry exception error code */
if (CC(has_error_code && if (CC(has_error_code &&
vmcs12->vm_entry_exception_error_code & GENMASK(31, 15))) vmcs12->vm_entry_exception_error_code & GENMASK(31, 16)))
return -EINVAL; return -EINVAL;
/* VM-entry interruption-info field: reserved bits */ /* VM-entry interruption-info field: reserved bits */

View File

@ -262,6 +262,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
static void intel_pmu_refresh(struct kvm_vcpu *vcpu) static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
{ {
struct kvm_pmu *pmu = vcpu_to_pmu(vcpu); struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
struct x86_pmu_capability x86_pmu;
struct kvm_cpuid_entry2 *entry; struct kvm_cpuid_entry2 *entry;
union cpuid10_eax eax; union cpuid10_eax eax;
union cpuid10_edx edx; union cpuid10_edx edx;
@ -283,8 +284,10 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
if (!pmu->version) if (!pmu->version)
return; return;
perf_get_x86_pmu_capability(&x86_pmu);
pmu->nr_arch_gp_counters = min_t(int, eax.split.num_counters, pmu->nr_arch_gp_counters = min_t(int, eax.split.num_counters,
INTEL_PMC_MAX_GENERIC); x86_pmu.num_counters_gp);
pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << eax.split.bit_width) - 1; pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << eax.split.bit_width) - 1;
pmu->available_event_types = ~entry->ebx & pmu->available_event_types = ~entry->ebx &
((1ull << eax.split.mask_length) - 1); ((1ull << eax.split.mask_length) - 1);
@ -294,7 +297,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
} else { } else {
pmu->nr_arch_fixed_counters = pmu->nr_arch_fixed_counters =
min_t(int, edx.split.num_counters_fixed, min_t(int, edx.split.num_counters_fixed,
INTEL_PMC_MAX_FIXED); x86_pmu.num_counters_fixed);
pmu->counter_bitmask[KVM_PMC_FIXED] = pmu->counter_bitmask[KVM_PMC_FIXED] =
((u64)1 << edx.split.bit_width_fixed) - 1; ((u64)1 << edx.split.bit_width_fixed) - 1;
} }

View File

@ -209,6 +209,11 @@ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
struct page *page; struct page *page;
unsigned int i; unsigned int i;
if (!boot_cpu_has_bug(X86_BUG_L1TF)) {
l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_NOT_REQUIRED;
return 0;
}
if (!enable_ept) { if (!enable_ept) {
l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_EPT_DISABLED; l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_EPT_DISABLED;
return 0; return 0;
@ -7995,12 +8000,10 @@ static int __init vmx_init(void)
* contain 'auto' which will be turned into the default 'cond' * contain 'auto' which will be turned into the default 'cond'
* mitigation mode. * mitigation mode.
*/ */
if (boot_cpu_has(X86_BUG_L1TF)) { r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
r = vmx_setup_l1d_flush(vmentry_l1d_flush_param); if (r) {
if (r) { vmx_exit();
vmx_exit(); return r;
return r;
}
} }
#ifdef CONFIG_KEXEC_CORE #ifdef CONFIG_KEXEC_CORE

View File

@ -92,8 +92,8 @@ u64 __read_mostly efer_reserved_bits = ~((u64)(EFER_SCE | EFER_LME | EFER_LMA));
static u64 __read_mostly efer_reserved_bits = ~((u64)EFER_SCE); static u64 __read_mostly efer_reserved_bits = ~((u64)EFER_SCE);
#endif #endif
#define VM_STAT(x) offsetof(struct kvm, stat.x), KVM_STAT_VM #define VM_STAT(x, ...) offsetof(struct kvm, stat.x), KVM_STAT_VM, ## __VA_ARGS__
#define VCPU_STAT(x) offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU #define VCPU_STAT(x, ...) offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU, ## __VA_ARGS__
#define KVM_X2APIC_API_VALID_FLAGS (KVM_X2APIC_API_USE_32BIT_IDS | \ #define KVM_X2APIC_API_VALID_FLAGS (KVM_X2APIC_API_USE_32BIT_IDS | \
KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK) KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK)
@ -212,7 +212,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "mmu_cache_miss", VM_STAT(mmu_cache_miss) }, { "mmu_cache_miss", VM_STAT(mmu_cache_miss) },
{ "mmu_unsync", VM_STAT(mmu_unsync) }, { "mmu_unsync", VM_STAT(mmu_unsync) },
{ "remote_tlb_flush", VM_STAT(remote_tlb_flush) }, { "remote_tlb_flush", VM_STAT(remote_tlb_flush) },
{ "largepages", VM_STAT(lpages) }, { "largepages", VM_STAT(lpages, .mode = 0444) },
{ "max_mmu_page_hash_collisions", { "max_mmu_page_hash_collisions",
VM_STAT(max_mmu_page_hash_collisions) }, VM_STAT(max_mmu_page_hash_collisions) },
{ NULL } { NULL }
@ -885,34 +885,42 @@ int kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
} }
EXPORT_SYMBOL_GPL(kvm_set_xcr); EXPORT_SYMBOL_GPL(kvm_set_xcr);
static int kvm_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
{
if (cr4 & CR4_RESERVED_BITS)
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) && (cr4 & X86_CR4_OSXSAVE))
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_SMEP) && (cr4 & X86_CR4_SMEP))
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_SMAP) && (cr4 & X86_CR4_SMAP))
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_FSGSBASE) && (cr4 & X86_CR4_FSGSBASE))
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_PKU) && (cr4 & X86_CR4_PKE))
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_LA57) && (cr4 & X86_CR4_LA57))
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_UMIP) && (cr4 & X86_CR4_UMIP))
return -EINVAL;
return 0;
}
int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4) int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
{ {
unsigned long old_cr4 = kvm_read_cr4(vcpu); unsigned long old_cr4 = kvm_read_cr4(vcpu);
unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE | unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE; X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE;
if (cr4 & CR4_RESERVED_BITS) if (kvm_valid_cr4(vcpu, cr4))
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) && (cr4 & X86_CR4_OSXSAVE))
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_SMEP) && (cr4 & X86_CR4_SMEP))
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_SMAP) && (cr4 & X86_CR4_SMAP))
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_FSGSBASE) && (cr4 & X86_CR4_FSGSBASE))
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_PKU) && (cr4 & X86_CR4_PKE))
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_LA57) && (cr4 & X86_CR4_LA57))
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_UMIP) && (cr4 & X86_CR4_UMIP))
return 1; return 1;
if (is_long_mode(vcpu)) { if (is_long_mode(vcpu)) {
@ -1161,13 +1169,6 @@ static u32 msrs_to_save[] = {
MSR_ARCH_PERFMON_PERFCTR0 + 12, MSR_ARCH_PERFMON_PERFCTR0 + 13, MSR_ARCH_PERFMON_PERFCTR0 + 12, MSR_ARCH_PERFMON_PERFCTR0 + 13,
MSR_ARCH_PERFMON_PERFCTR0 + 14, MSR_ARCH_PERFMON_PERFCTR0 + 15, MSR_ARCH_PERFMON_PERFCTR0 + 14, MSR_ARCH_PERFMON_PERFCTR0 + 15,
MSR_ARCH_PERFMON_PERFCTR0 + 16, MSR_ARCH_PERFMON_PERFCTR0 + 17, MSR_ARCH_PERFMON_PERFCTR0 + 16, MSR_ARCH_PERFMON_PERFCTR0 + 17,
MSR_ARCH_PERFMON_PERFCTR0 + 18, MSR_ARCH_PERFMON_PERFCTR0 + 19,
MSR_ARCH_PERFMON_PERFCTR0 + 20, MSR_ARCH_PERFMON_PERFCTR0 + 21,
MSR_ARCH_PERFMON_PERFCTR0 + 22, MSR_ARCH_PERFMON_PERFCTR0 + 23,
MSR_ARCH_PERFMON_PERFCTR0 + 24, MSR_ARCH_PERFMON_PERFCTR0 + 25,
MSR_ARCH_PERFMON_PERFCTR0 + 26, MSR_ARCH_PERFMON_PERFCTR0 + 27,
MSR_ARCH_PERFMON_PERFCTR0 + 28, MSR_ARCH_PERFMON_PERFCTR0 + 29,
MSR_ARCH_PERFMON_PERFCTR0 + 30, MSR_ARCH_PERFMON_PERFCTR0 + 31,
MSR_ARCH_PERFMON_EVENTSEL0, MSR_ARCH_PERFMON_EVENTSEL1, MSR_ARCH_PERFMON_EVENTSEL0, MSR_ARCH_PERFMON_EVENTSEL1,
MSR_ARCH_PERFMON_EVENTSEL0 + 2, MSR_ARCH_PERFMON_EVENTSEL0 + 3, MSR_ARCH_PERFMON_EVENTSEL0 + 2, MSR_ARCH_PERFMON_EVENTSEL0 + 3,
MSR_ARCH_PERFMON_EVENTSEL0 + 4, MSR_ARCH_PERFMON_EVENTSEL0 + 5, MSR_ARCH_PERFMON_EVENTSEL0 + 4, MSR_ARCH_PERFMON_EVENTSEL0 + 5,
@ -1177,13 +1178,6 @@ static u32 msrs_to_save[] = {
MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13, MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13,
MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15, MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15,
MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17, MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
MSR_ARCH_PERFMON_EVENTSEL0 + 18, MSR_ARCH_PERFMON_EVENTSEL0 + 19,
MSR_ARCH_PERFMON_EVENTSEL0 + 20, MSR_ARCH_PERFMON_EVENTSEL0 + 21,
MSR_ARCH_PERFMON_EVENTSEL0 + 22, MSR_ARCH_PERFMON_EVENTSEL0 + 23,
MSR_ARCH_PERFMON_EVENTSEL0 + 24, MSR_ARCH_PERFMON_EVENTSEL0 + 25,
MSR_ARCH_PERFMON_EVENTSEL0 + 26, MSR_ARCH_PERFMON_EVENTSEL0 + 27,
MSR_ARCH_PERFMON_EVENTSEL0 + 28, MSR_ARCH_PERFMON_EVENTSEL0 + 29,
MSR_ARCH_PERFMON_EVENTSEL0 + 30, MSR_ARCH_PERFMON_EVENTSEL0 + 31,
}; };
static unsigned num_msrs_to_save; static unsigned num_msrs_to_save;
@ -5097,13 +5091,14 @@ out:
static void kvm_init_msr_list(void) static void kvm_init_msr_list(void)
{ {
struct x86_pmu_capability x86_pmu;
u32 dummy[2]; u32 dummy[2];
unsigned i, j; unsigned i, j;
BUILD_BUG_ON_MSG(INTEL_PMC_MAX_FIXED != 4, BUILD_BUG_ON_MSG(INTEL_PMC_MAX_FIXED != 4,
"Please update the fixed PMCs in msrs_to_save[]"); "Please update the fixed PMCs in msrs_to_save[]");
BUILD_BUG_ON_MSG(INTEL_PMC_MAX_GENERIC != 32,
"Please update the generic perfctr/eventsel MSRs in msrs_to_save[]"); perf_get_x86_pmu_capability(&x86_pmu);
for (i = j = 0; i < ARRAY_SIZE(msrs_to_save); i++) { for (i = j = 0; i < ARRAY_SIZE(msrs_to_save); i++) {
if (rdmsr_safe(msrs_to_save[i], &dummy[0], &dummy[1]) < 0) if (rdmsr_safe(msrs_to_save[i], &dummy[0], &dummy[1]) < 0)
@ -5145,6 +5140,15 @@ static void kvm_init_msr_list(void)
intel_pt_validate_hw_cap(PT_CAP_num_address_ranges) * 2) intel_pt_validate_hw_cap(PT_CAP_num_address_ranges) * 2)
continue; continue;
break; break;
case MSR_ARCH_PERFMON_PERFCTR0 ... MSR_ARCH_PERFMON_PERFCTR0 + 17:
if (msrs_to_save[i] - MSR_ARCH_PERFMON_PERFCTR0 >=
min(INTEL_PMC_MAX_GENERIC, x86_pmu.num_counters_gp))
continue;
break;
case MSR_ARCH_PERFMON_EVENTSEL0 ... MSR_ARCH_PERFMON_EVENTSEL0 + 17:
if (msrs_to_save[i] - MSR_ARCH_PERFMON_EVENTSEL0 >=
min(INTEL_PMC_MAX_GENERIC, x86_pmu.num_counters_gp))
continue;
} }
default: default:
break; break;
@ -8714,10 +8718,6 @@ EXPORT_SYMBOL_GPL(kvm_task_switch);
static int kvm_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs) static int kvm_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
{ {
if (!guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
(sregs->cr4 & X86_CR4_OSXSAVE))
return -EINVAL;
if ((sregs->efer & EFER_LME) && (sregs->cr0 & X86_CR0_PG)) { if ((sregs->efer & EFER_LME) && (sregs->cr0 & X86_CR0_PG)) {
/* /*
* When EFER.LME and CR0.PG are set, the processor is in * When EFER.LME and CR0.PG are set, the processor is in
@ -8736,7 +8736,7 @@ static int kvm_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
return -EINVAL; return -EINVAL;
} }
return 0; return kvm_valid_cr4(vcpu, sregs->cr4);
} }
static int __set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs) static int __set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)

View File

@ -57,19 +57,7 @@ static efi_system_table_t __init *xen_efi_probe(void)
return NULL; return NULL;
/* Here we know that Xen runs on EFI platform. */ /* Here we know that Xen runs on EFI platform. */
xen_efi_runtime_setup();
efi.get_time = xen_efi_get_time;
efi.set_time = xen_efi_set_time;
efi.get_wakeup_time = xen_efi_get_wakeup_time;
efi.set_wakeup_time = xen_efi_set_wakeup_time;
efi.get_variable = xen_efi_get_variable;
efi.get_next_variable = xen_efi_get_next_variable;
efi.set_variable = xen_efi_set_variable;
efi.query_variable_info = xen_efi_query_variable_info;
efi.update_capsule = xen_efi_update_capsule;
efi.query_capsule_caps = xen_efi_query_capsule_caps;
efi.get_next_high_mono_count = xen_efi_get_next_high_mono_count;
efi.reset_system = xen_efi_reset_system;
efi_systab_xen.tables = info->cfg.addr; efi_systab_xen.tables = info->cfg.addr;
efi_systab_xen.nr_tables = info->cfg.nent; efi_systab_xen.nr_tables = info->cfg.nent;

View File

@ -1992,10 +1992,14 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
/* bypass scheduler for flush rq */ /* bypass scheduler for flush rq */
blk_insert_flush(rq); blk_insert_flush(rq);
blk_mq_run_hw_queue(data.hctx, true); blk_mq_run_hw_queue(data.hctx, true);
} else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs)) { } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs ||
!blk_queue_nonrot(q))) {
/* /*
* Use plugging if we have a ->commit_rqs() hook as well, as * Use plugging if we have a ->commit_rqs() hook as well, as
* we know the driver uses bd->last in a smart fashion. * we know the driver uses bd->last in a smart fashion.
*
* Use normal plugging if this disk is slow HDD, as sequential
* IO may benefit a lot from plug merging.
*/ */
unsigned int request_count = plug->rq_count; unsigned int request_count = plug->rq_count;
struct request *last = NULL; struct request *last = NULL;
@ -2012,6 +2016,8 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
} }
blk_add_rq_to_plug(plug, rq); blk_add_rq_to_plug(plug, rq);
} else if (q->elevator) {
blk_mq_sched_insert_request(rq, false, true, true);
} else if (plug && !blk_queue_nomerges(q)) { } else if (plug && !blk_queue_nomerges(q)) {
/* /*
* We do limited plugging. If the bio can be merged, do that. * We do limited plugging. If the bio can be merged, do that.
@ -2035,8 +2041,8 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
blk_mq_try_issue_directly(data.hctx, same_queue_rq, blk_mq_try_issue_directly(data.hctx, same_queue_rq,
&cookie); &cookie);
} }
} else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator && } else if ((q->nr_hw_queues > 1 && is_sync) ||
!data.hctx->dispatch_busy)) { !data.hctx->dispatch_busy) {
blk_mq_try_issue_directly(data.hctx, rq, &cookie); blk_mq_try_issue_directly(data.hctx, rq, &cookie);
} else { } else {
blk_mq_sched_insert_request(rq, false, true, true); blk_mq_sched_insert_request(rq, false, true, true);

View File

@ -129,7 +129,7 @@ static const u8 opaluid[][OPAL_UID_LENGTH] = {
{ 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x84, 0x01 }, { 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x84, 0x01 },
/* tables */ /* tables */
[OPAL_TABLE_TABLE] [OPAL_TABLE_TABLE] =
{ 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01 }, { 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01 },
[OPAL_LOCKINGRANGE_GLOBAL] = [OPAL_LOCKINGRANGE_GLOBAL] =
{ 0x00, 0x00, 0x08, 0x02, 0x00, 0x00, 0x00, 0x01 }, { 0x00, 0x00, 0x08, 0x02, 0x00, 0x00, 0x00, 0x01 },
@ -372,8 +372,8 @@ static void check_geometry(struct opal_dev *dev, const void *data)
{ {
const struct d0_geometry_features *geo = data; const struct d0_geometry_features *geo = data;
dev->align = geo->alignment_granularity; dev->align = be64_to_cpu(geo->alignment_granularity);
dev->lowest_lba = geo->lowest_aligned_lba; dev->lowest_lba = be64_to_cpu(geo->lowest_aligned_lba);
} }
static int execute_step(struct opal_dev *dev, static int execute_step(struct opal_dev *dev,

View File
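The check_geometry() fix above matters because the TCG Opal Level-0 discovery response carries multi-byte integers big-endian on the wire, so they must go through be64_to_cpu() before use on little-endian hosts. A user-space sketch of the same conversion step (glibc's be64toh stands in for the kernel's be64_to_cpu; the struct layout here is invented for illustration, not the real d0_geometry_features):

#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct geometry_wire {			/* illustrative layout only */
	uint64_t alignment_granularity;	/* big-endian on the wire   */
	uint64_t lowest_aligned_lba;	/* big-endian on the wire   */
} __attribute__((packed));

int main(void)
{
	/* 8 and 1, encoded big-endian as a device would return them */
	unsigned char raw[16] = { 0, 0, 0, 0, 0, 0, 0, 8,
				  0, 0, 0, 0, 0, 0, 0, 1 };
	struct geometry_wire geo;

	memcpy(&geo, raw, sizeof(geo));
	printf("align=%llu lowest_lba=%llu\n",
	       (unsigned long long)be64toh(geo.alignment_granularity),
	       (unsigned long long)be64toh(geo.lowest_aligned_lba));
	return 0;
}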

@ -994,6 +994,16 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode,
if (!(lo_flags & LO_FLAGS_READ_ONLY) && file->f_op->fsync) if (!(lo_flags & LO_FLAGS_READ_ONLY) && file->f_op->fsync)
blk_queue_write_cache(lo->lo_queue, true, false); blk_queue_write_cache(lo->lo_queue, true, false);
if (io_is_direct(lo->lo_backing_file) && inode->i_sb->s_bdev) {
/* In case of direct I/O, match underlying block size */
unsigned short bsize = bdev_logical_block_size(
inode->i_sb->s_bdev);
blk_queue_logical_block_size(lo->lo_queue, bsize);
blk_queue_physical_block_size(lo->lo_queue, bsize);
blk_queue_io_min(lo->lo_queue, bsize);
}
loop_update_rotational(lo); loop_update_rotational(lo);
loop_update_dio(lo); loop_update_dio(lo);
set_capacity(lo->lo_disk, size); set_capacity(lo->lo_disk, size);
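The loop hunk above makes a direct-I/O-backed loop device advertise the logical block size of the filesystem's backing block device rather than the 512-byte default. A small user-space way to observe the effect (device paths are examples only and need read permission; BLKSSZGET is the standard ioctl behind tools like blockdev --getss):

#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int logical_block_size(const char *path)
{
	int fd = open(path, O_RDONLY);
	int size = -1;

	if (fd >= 0) {
		if (ioctl(fd, BLKSSZGET, &size) < 0)
			size = -1;
		close(fd);
	}
	return size;
}

int main(void)
{
	printf("loop0: %d bytes, backing disk: %d bytes\n",
	       logical_block_size("/dev/loop0"),
	       logical_block_size("/dev/sda"));
	return 0;
}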

View File

@ -2520,4 +2520,4 @@ void add_bootloader_randomness(const void *buf, unsigned int size)
else else
add_device_randomness(buf, size); add_device_randomness(buf, size);
} }
EXPORT_SYMBOL_GPL(add_bootloader_randomness); EXPORT_SYMBOL_GPL(add_bootloader_randomness);

View File

@ -683,7 +683,7 @@ static const struct omap_clkctrl_reg_data dra7_l4per2_clkctrl_regs[] __initconst
{ DRA7_L4PER2_MCASP2_CLKCTRL, dra7_mcasp2_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:0154:22" }, { DRA7_L4PER2_MCASP2_CLKCTRL, dra7_mcasp2_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:0154:22" },
{ DRA7_L4PER2_MCASP3_CLKCTRL, dra7_mcasp3_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:015c:22" }, { DRA7_L4PER2_MCASP3_CLKCTRL, dra7_mcasp3_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:015c:22" },
{ DRA7_L4PER2_MCASP5_CLKCTRL, dra7_mcasp5_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:016c:22" }, { DRA7_L4PER2_MCASP5_CLKCTRL, dra7_mcasp5_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:016c:22" },
{ DRA7_L4PER2_MCASP8_CLKCTRL, dra7_mcasp8_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:0184:24" }, { DRA7_L4PER2_MCASP8_CLKCTRL, dra7_mcasp8_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:0184:22" },
{ DRA7_L4PER2_MCASP4_CLKCTRL, dra7_mcasp4_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:018c:22" }, { DRA7_L4PER2_MCASP4_CLKCTRL, dra7_mcasp4_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:018c:22" },
{ DRA7_L4PER2_UART7_CLKCTRL, dra7_uart7_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:01c4:24" }, { DRA7_L4PER2_UART7_CLKCTRL, dra7_uart7_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:01c4:24" },
{ DRA7_L4PER2_UART8_CLKCTRL, dra7_uart8_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:01d4:24" }, { DRA7_L4PER2_UART8_CLKCTRL, dra7_uart8_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:01d4:24" },
@ -828,8 +828,8 @@ static struct ti_dt_clk dra7xx_clks[] = {
DT_CLK(NULL, "mcasp6_aux_gfclk_mux", "l4per2-clkctrl:01f8:22"), DT_CLK(NULL, "mcasp6_aux_gfclk_mux", "l4per2-clkctrl:01f8:22"),
DT_CLK(NULL, "mcasp7_ahclkx_mux", "l4per2-clkctrl:01fc:24"), DT_CLK(NULL, "mcasp7_ahclkx_mux", "l4per2-clkctrl:01fc:24"),
DT_CLK(NULL, "mcasp7_aux_gfclk_mux", "l4per2-clkctrl:01fc:22"), DT_CLK(NULL, "mcasp7_aux_gfclk_mux", "l4per2-clkctrl:01fc:22"),
DT_CLK(NULL, "mcasp8_ahclkx_mux", "l4per2-clkctrl:0184:22"), DT_CLK(NULL, "mcasp8_ahclkx_mux", "l4per2-clkctrl:0184:24"),
DT_CLK(NULL, "mcasp8_aux_gfclk_mux", "l4per2-clkctrl:0184:24"), DT_CLK(NULL, "mcasp8_aux_gfclk_mux", "l4per2-clkctrl:0184:22"),
DT_CLK(NULL, "mmc1_clk32k", "l3init-clkctrl:0008:8"), DT_CLK(NULL, "mmc1_clk32k", "l3init-clkctrl:0008:8"),
DT_CLK(NULL, "mmc1_fclk_div", "l3init-clkctrl:0008:25"), DT_CLK(NULL, "mmc1_fclk_div", "l3init-clkctrl:0008:25"),
DT_CLK(NULL, "mmc1_fclk_mux", "l3init-clkctrl:0008:24"), DT_CLK(NULL, "mmc1_fclk_mux", "l3init-clkctrl:0008:24"),

View File

@ -25,7 +25,9 @@ static __init void timer_of_irq_exit(struct of_timer_irq *of_irq)
struct clock_event_device *clkevt = &to->clkevt; struct clock_event_device *clkevt = &to->clkevt;
of_irq->percpu ? free_percpu_irq(of_irq->irq, clkevt) : if (of_irq->percpu)
free_percpu_irq(of_irq->irq, clkevt);
else
free_irq(of_irq->irq, clkevt); free_irq(of_irq->irq, clkevt);
} }

View File

@ -54,7 +54,7 @@ amdgpu-y += amdgpu_device.o amdgpu_kms.o \
amdgpu_gtt_mgr.o amdgpu_vram_mgr.o amdgpu_virt.o amdgpu_atomfirmware.o \ amdgpu_gtt_mgr.o amdgpu_vram_mgr.o amdgpu_virt.o amdgpu_atomfirmware.o \
amdgpu_vf_error.o amdgpu_sched.o amdgpu_debugfs.o amdgpu_ids.o \ amdgpu_vf_error.o amdgpu_sched.o amdgpu_debugfs.o amdgpu_ids.o \
amdgpu_gmc.o amdgpu_xgmi.o amdgpu_csa.o amdgpu_ras.o amdgpu_vm_cpu.o \ amdgpu_gmc.o amdgpu_xgmi.o amdgpu_csa.o amdgpu_ras.o amdgpu_vm_cpu.o \
amdgpu_vm_sdma.o amdgpu_pmu.o amdgpu_discovery.o amdgpu_ras_eeprom.o smu_v11_0_i2c.o amdgpu_vm_sdma.o amdgpu_discovery.o amdgpu_ras_eeprom.o smu_v11_0_i2c.o
amdgpu-$(CONFIG_PERF_EVENTS) += amdgpu_pmu.o amdgpu-$(CONFIG_PERF_EVENTS) += amdgpu_pmu.o

View File

@@ -189,7 +189,7 @@ static int acp_hw_init(void *handle)
 	u32 val = 0;
 	u32 count = 0;
 	struct device *dev;
-	struct i2s_platform_data *i2s_pdata;
+	struct i2s_platform_data *i2s_pdata = NULL;
 
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
@@ -231,20 +231,21 @@ static int acp_hw_init(void *handle)
 	adev->acp.acp_cell = kcalloc(ACP_DEVS, sizeof(struct mfd_cell),
 							GFP_KERNEL);
-	if (adev->acp.acp_cell == NULL)
-		return -ENOMEM;
+	if (adev->acp.acp_cell == NULL) {
+		r = -ENOMEM;
+		goto failure;
+	}
 
 	adev->acp.acp_res = kcalloc(5, sizeof(struct resource), GFP_KERNEL);
 	if (adev->acp.acp_res == NULL) {
-		kfree(adev->acp.acp_cell);
-		return -ENOMEM;
+		r = -ENOMEM;
+		goto failure;
 	}
 
 	i2s_pdata = kcalloc(3, sizeof(struct i2s_platform_data), GFP_KERNEL);
 	if (i2s_pdata == NULL) {
-		kfree(adev->acp.acp_res);
-		kfree(adev->acp.acp_cell);
-		return -ENOMEM;
+		r = -ENOMEM;
+		goto failure;
 	}
 
 	switch (adev->asic_type) {
@@ -341,14 +342,14 @@ static int acp_hw_init(void *handle)
 	r = mfd_add_hotplug_devices(adev->acp.parent, adev->acp.acp_cell,
 								ACP_DEVS);
 	if (r)
-		return r;
+		goto failure;
 
 	for (i = 0; i < ACP_DEVS ; i++) {
 		dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
 		r = pm_genpd_add_device(&adev->acp.acp_genpd->gpd, dev);
 		if (r) {
 			dev_err(dev, "Failed to add dev to genpd\n");
-			return r;
+			goto failure;
 		}
 	}
@@ -367,7 +368,8 @@ static int acp_hw_init(void *handle)
 			break;
 		if (--count == 0) {
 			dev_err(&adev->pdev->dev, "Failed to reset ACP\n");
-			return -ETIMEDOUT;
+			r = -ETIMEDOUT;
+			goto failure;
 		}
 		udelay(100);
 	}
@@ -384,7 +386,8 @@ static int acp_hw_init(void *handle)
 			break;
 		if (--count == 0) {
 			dev_err(&adev->pdev->dev, "Failed to reset ACP\n");
-			return -ETIMEDOUT;
+			r = -ETIMEDOUT;
+			goto failure;
 		}
 		udelay(100);
 	}
@@ -393,6 +396,13 @@ static int acp_hw_init(void *handle)
 	val &= ~ACP_SOFT_RESET__SoftResetAud_MASK;
 	cgs_write_register(adev->acp.cgs_device, mmACP_SOFT_RESET, val);
 	return 0;
+
+failure:
+	kfree(i2s_pdata);
+	kfree(adev->acp.acp_res);
+	kfree(adev->acp.acp_cell);
+	kfree(adev->acp.acp_genpd);
+	return r;
 }
/** /**

View File

@@ -81,9 +81,10 @@
  * - 3.32.0 - Add syncobj timeline support to AMDGPU_CS.
  * - 3.33.0 - Fixes for GDS ENOMEM failures in AMDGPU_CS.
  * - 3.34.0 - Non-DC can flip correctly between buffers with different pitches
+ * - 3.35.0 - Add drm_amdgpu_info_device::tcc_disabled_mask
  */
 #define KMS_DRIVER_MAJOR	3
-#define KMS_DRIVER_MINOR	34
+#define KMS_DRIVER_MINOR	35
 #define KMS_DRIVER_PATCHLEVEL	0
 #define AMDGPU_MAX_TIMEOUT_PARAM_LENTH	256

View File

@@ -165,6 +165,7 @@ struct amdgpu_gfx_config {
 	uint32_t num_sc_per_sh;
 	uint32_t num_packer_per_sc;
 	uint32_t pa_sc_tile_steering_override;
+	uint64_t tcc_disabled_mask;
 };
 
 struct amdgpu_cu_info {

View File

@@ -787,6 +787,8 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
 		dev_info.pa_sc_tile_steering_override =
 			adev->gfx.config.pa_sc_tile_steering_override;
 
+		dev_info.tcc_disabled_mask = adev->gfx.config.tcc_disabled_mask;
+
 		return copy_to_user(out, &dev_info,
 				min((size_t)size, sizeof(dev_info))) ? -EFAULT : 0;
 	}

View File

@@ -603,14 +603,12 @@ void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
 	struct ttm_bo_global *glob = adev->mman.bdev.glob;
 	struct amdgpu_vm_bo_base *bo_base;
 
-#if 0
 	if (vm->bulk_moveable) {
 		spin_lock(&glob->lru_lock);
 		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
 		spin_unlock(&glob->lru_lock);
 		return;
 	}
-#endif
 
 	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));

View File

@@ -1691,6 +1691,17 @@ static void gfx_v10_0_tcp_harvest(struct amdgpu_device *adev)
 	}
 }
 
+static void gfx_v10_0_get_tcc_info(struct amdgpu_device *adev)
+{
+	/* TCCs are global (not instanced). */
+	uint32_t tcc_disable = RREG32_SOC15(GC, 0, mmCGTS_TCC_DISABLE) |
+			       RREG32_SOC15(GC, 0, mmCGTS_USER_TCC_DISABLE);
+
+	adev->gfx.config.tcc_disabled_mask =
+		REG_GET_FIELD(tcc_disable, CGTS_TCC_DISABLE, TCC_DISABLE) |
+		(REG_GET_FIELD(tcc_disable, CGTS_TCC_DISABLE, HI_TCC_DISABLE) << 16);
+}
+
 static void gfx_v10_0_constants_init(struct amdgpu_device *adev)
 {
 	u32 tmp;
@@ -1702,6 +1713,7 @@ static void gfx_v10_0_constants_init(struct amdgpu_device *adev)
 	gfx_v10_0_setup_rb(adev);
 	gfx_v10_0_get_cu_info(adev, &adev->gfx.cu_info);
+	gfx_v10_0_get_tcc_info(adev);
 	adev->gfx.config.pa_sc_tile_steering_override =
 		gfx_v10_0_init_pa_sc_tile_steering_override(adev);

View File

@@ -317,10 +317,12 @@ static int nv_asic_reset(struct amdgpu_device *adev)
 	struct smu_context *smu = &adev->smu;
 
 	if (nv_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) {
-		amdgpu_inc_vram_lost(adev);
+		if (!adev->in_suspend)
+			amdgpu_inc_vram_lost(adev);
 		ret = smu_baco_reset(smu);
 	} else {
-		amdgpu_inc_vram_lost(adev);
+		if (!adev->in_suspend)
+			amdgpu_inc_vram_lost(adev);
 		ret = nv_asic_mode1_reset(adev);
 	}

View File

@@ -558,12 +558,14 @@ static int soc15_asic_reset(struct amdgpu_device *adev)
 {
 	switch (soc15_asic_reset_method(adev)) {
 	case AMD_RESET_METHOD_BACO:
-		amdgpu_inc_vram_lost(adev);
+		if (!adev->in_suspend)
+			amdgpu_inc_vram_lost(adev);
 		return soc15_asic_baco_reset(adev);
 	case AMD_RESET_METHOD_MODE2:
 		return soc15_mode2_reset(adev);
 	default:
-		amdgpu_inc_vram_lost(adev);
+		if (!adev->in_suspend)
+			amdgpu_inc_vram_lost(adev);
 		return soc15_asic_mode1_reset(adev);
 	}
 }
@@ -771,8 +773,6 @@ int soc15_set_ip_blocks(struct amdgpu_device *adev)
 #if defined(CONFIG_DRM_AMD_DC)
 		else if (amdgpu_device_has_dc_support(adev))
 			amdgpu_device_ip_block_add(adev, &dm_ip_block);
-#else
-#	warning "Enable CONFIG_DRM_AMD_DC for display support on SOC15."
 #endif
 		amdgpu_device_ip_block_add(adev, &vcn_v2_0_ip_block);
 		break;

View File

@@ -2385,8 +2385,6 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
 	if (adev->asic_type != CHIP_CARRIZO && adev->asic_type != CHIP_STONEY)
 		dm->dc->debug.disable_stutter = amdgpu_pp_feature_mask & PP_STUTTER_MODE ? false : true;
 
-	if (adev->asic_type == CHIP_RENOIR)
-		dm->dc->debug.disable_stutter = true;
 
 	return 0;
 fail:
@@ -6019,7 +6017,9 @@ static void amdgpu_dm_enable_crtc_interrupts(struct drm_device *dev,
 	struct drm_crtc *crtc;
 	struct drm_crtc_state *old_crtc_state, *new_crtc_state;
 	int i;
+#ifdef CONFIG_DEBUG_FS
 	enum amdgpu_dm_pipe_crc_source source;
+#endif
 
 	for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state,
 				      new_crtc_state, i) {

View File

@@ -668,6 +668,7 @@ struct clock_source *dce100_clock_source_create(
 		return &clk_src->base;
 	}
 
+	kfree(clk_src);
 	BREAK_TO_DEBUGGER();
 	return NULL;
 }

View File

@@ -714,6 +714,7 @@ struct clock_source *dce110_clock_source_create(
 		return &clk_src->base;
 	}
 
+	kfree(clk_src);
 	BREAK_TO_DEBUGGER();
 	return NULL;
 }

View File

@@ -687,6 +687,7 @@ struct clock_source *dce112_clock_source_create(
 		return &clk_src->base;
 	}
 
+	kfree(clk_src);
 	BREAK_TO_DEBUGGER();
 	return NULL;
 }

View File

@@ -500,6 +500,7 @@ static struct clock_source *dce120_clock_source_create(
 		return &clk_src->base;
 	}
 
+	kfree(clk_src);
 	BREAK_TO_DEBUGGER();
 	return NULL;
 }

View File

@@ -701,6 +701,7 @@ struct clock_source *dce80_clock_source_create(
 		return &clk_src->base;
 	}
 
+	kfree(clk_src);
 	BREAK_TO_DEBUGGER();
 	return NULL;
 }

Some files were not shown because too many files have changed in this diff.