Linux 5.16-rc2

-----BEGIN PGP SIGNATURE-----
 
 iQFSBAABCAA8FiEEq68RxlopcLEwq+PEeb4+QwBBGIYFAmGavnseHHRvcnZhbGRz
 QGxpbnV4LWZvdW5kYXRpb24ub3JnAAoJEHm+PkMAQRiGcl4H/jyFVlHDSa+utMA5
 7PEQX0AarkBtSvKUgK/SivZxX06nYp2UU5L4Jn70O/mccXWo0ru82eDVO3nSImDR
 Mi668IqzbYfGqVL6CMztDku+XbyT3Yr/i9QILFbLWV5DhCM422GXXN8PFBibDHdI
 6Oyt1WoUh404yjVIHOCNwprfLObxREV6ARhFsIsmCRa8Hf+RkKOY5Twua6j5emm5
 aamiq6SYLtf2H5+DwkR5TnPkie6I2o8oLtA7JYiJpKh5KK75qjlpzFd3S3OWsi1H
 0g752g12r7tLh4ac3Xfgwf36pQ2CdiZ7NUOkJhZWT4aHPqPh+MVheQfpR41f5Sgc
 pvFslTo=
 =QdMf
 -----END PGP SIGNATURE-----

Merge tag 'v5.16-rc2' into devel

Linux 5.16-rc2 is needed because nonurgent fixes headed
for next are strongly textually dependent on a fix that
was applied for rc2.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Linus Walleij 2021-11-27 00:54:16 +01:00
commit 2448eab440
377 changed files with 3368 additions and 2001 deletions


@ -71,6 +71,9 @@ Chao Yu <chao@kernel.org> <chao2.yu@samsung.com>
Chao Yu <chao@kernel.org> <yuchao0@huawei.com>
Chris Chiu <chris.chiu@canonical.com> <chiu@endlessm.com>
Chris Chiu <chris.chiu@canonical.com> <chiu@endlessos.org>
Christian Borntraeger <borntraeger@linux.ibm.com> <borntraeger@de.ibm.com>
Christian Borntraeger <borntraeger@linux.ibm.com> <cborntra@de.ibm.com>
Christian Borntraeger <borntraeger@linux.ibm.com> <borntrae@de.ibm.com>
Christophe Ricard <christophe.ricard@gmail.com>
Christoph Hellwig <hch@lst.de>
Colin Ian King <colin.king@intel.com> <colin.king@canonical.com>


@ -1520,15 +1520,15 @@ This sysfs attribute controls the keyboard "face" that will be shown on the
Lenovo X1 Carbon 2nd gen (2014)'s adaptive keyboard. The value can be read
and set.
- 1 = Home mode
- 2 = Web-browser mode
- 3 = Web-conference mode
- 4 = Function mode
- 5 = Layflat mode
- 0 = Home mode
- 1 = Web-browser mode
- 2 = Web-conference mode
- 3 = Function mode
- 4 = Layflat mode
For more details about which buttons will appear depending on the mode, please
review the laptop's user guide:
http://www.lenovo.com/shop/americas/content/user_guides/x1carbon_2_ug_en.pdf
https://download.lenovo.com/ibmdl/pub/pc/pccbbs/mobiles_pdf/x1carbon_2_ug_en.pdf
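As a quick illustration, the mode can be read and changed from a shell. This is a
sketch only: the attribute path below assumes the driver exposes it as
/sys/devices/platform/thinkpad_acpi/adaptive_kbd_mode, which may differ on a given system::
$ cat /sys/devices/platform/thinkpad_acpi/adaptive_kbd_mode
$ echo 1 | sudo tee /sys/devices/platform/thinkpad_acpi/adaptive_kbd_mode   # switch to Web-browser mode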
Battery charge control
----------------------


@ -1099,7 +1099,7 @@ task_delayacct
===============
Enables/disables task delay accounting (see
:doc:`accounting/delay-accounting.rst`). Enabling this feature incurs
Documentation/accounting/delay-accounting.rst. Enabling this feature incurs
a small amount of overhead in the scheduler but is useful for debugging
and performance tuning. It is required by some tools such as iotop.
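For example (a sketch, assuming the kernel was built with delay accounting support),
the feature can be toggled at runtime through this sysctl::
# echo 1 > /proc/sys/kernel/task_delayacct   # enable, so tools such as iotop can report delays
# iotop
# echo 0 > /proc/sys/kernel/task_delayacct   # disable again to drop the scheduler overhead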


@ -104,6 +104,8 @@ Discovery family
Not supported by the Linux kernel.
Homepage:
https://web.archive.org/web/20110924171043/http://www.marvell.com/embedded-processors/discovery-innovation/
Core:
Feroceon 88fr571-vd ARMv5 compatible
@ -120,6 +122,7 @@ EBU Armada family
- 88F6707
- 88F6W11
- Product infos: https://web.archive.org/web/20141002083258/http://www.marvell.com/embedded-processors/armada-370/
- Product Brief: https://web.archive.org/web/20121115063038/http://www.marvell.com/embedded-processors/armada-300/assets/Marvell_ARMADA_370_SoC.pdf
- Hardware Spec: https://web.archive.org/web/20140617183747/http://www.marvell.com/embedded-processors/armada-300/assets/ARMADA370-datasheet.pdf
- Functional Spec: https://web.archive.org/web/20140617183701/http://www.marvell.com/embedded-processors/armada-300/assets/ARMADA370-FunctionalSpec-datasheet.pdf
@ -127,9 +130,29 @@ EBU Armada family
Core:
Sheeva ARMv7 compatible PJ4B
Armada XP Flavors:
- MV78230
- MV78260
- MV78460
NOTE:
not to be confused with the non-SMP 78xx0 SoCs
- Product infos: https://web.archive.org/web/20150101215721/http://www.marvell.com/embedded-processors/armada-xp/
- Product Brief: https://web.archive.org/web/20121021173528/http://www.marvell.com/embedded-processors/armada-xp/assets/Marvell-ArmadaXP-SoC-product%20brief.pdf
- Functional Spec: https://web.archive.org/web/20180829171131/http://www.marvell.com/embedded-processors/armada-xp/assets/ARMADA-XP-Functional-SpecDatasheet.pdf
- Hardware Specs:
- https://web.archive.org/web/20141127013651/http://www.marvell.com/embedded-processors/armada-xp/assets/HW_MV78230_OS.PDF
- https://web.archive.org/web/20141222000224/http://www.marvell.com/embedded-processors/armada-xp/assets/HW_MV78260_OS.PDF
- https://web.archive.org/web/20141222000230/http://www.marvell.com/embedded-processors/armada-xp/assets/HW_MV78460_OS.PDF
Core:
Sheeva ARMv7 compatible Dual-core or Quad-core PJ4B-MP
Armada 375 Flavors:
- 88F6720
- Product infos: https://web.archive.org/web/20140108032402/http://www.marvell.com/embedded-processors/armada-375/
- Product Brief: https://web.archive.org/web/20131216023516/http://www.marvell.com/embedded-processors/armada-300/assets/ARMADA_375_SoC-01_product_brief.pdf
Core:
@ -162,29 +185,6 @@ EBU Armada family
Core:
ARM Cortex-A9
Armada XP Flavors:
- MV78230
- MV78260
- MV78460
NOTE:
not to be confused with the non-SMP 78xx0 SoCs
Product Brief:
https://web.archive.org/web/20121021173528/http://www.marvell.com/embedded-processors/armada-xp/assets/Marvell-ArmadaXP-SoC-product%20brief.pdf
Functional Spec:
https://web.archive.org/web/20180829171131/http://www.marvell.com/embedded-processors/armada-xp/assets/ARMADA-XP-Functional-SpecDatasheet.pdf
- Hardware Specs:
- https://web.archive.org/web/20141127013651/http://www.marvell.com/embedded-processors/armada-xp/assets/HW_MV78230_OS.PDF
- https://web.archive.org/web/20141222000224/http://www.marvell.com/embedded-processors/armada-xp/assets/HW_MV78260_OS.PDF
- https://web.archive.org/web/20141222000230/http://www.marvell.com/embedded-processors/armada-xp/assets/HW_MV78460_OS.PDF
Core:
Sheeva ARMv7 compatible Dual-core or Quad-core PJ4B-MP
Linux kernel mach directory:
arch/arm/mach-mvebu
Linux kernel plat directory:
@ -436,7 +436,7 @@ Berlin family (Multimedia Solutions)
- Flavors:
- 88DE3010, Armada 1000 (no Linux support)
- Core: Marvell PJ1 (ARMv5TE), Dual-core
- Product Brief: http://www.marvell.com.cn/digital-entertainment/assets/armada_1000_pb.pdf
- Product Brief: https://web.archive.org/web/20131103162620/http://www.marvell.com/digital-entertainment/assets/armada_1000_pb.pdf
- 88DE3005, Armada 1500 Mini
- Design name: BG2CD
- Core: ARM Cortex-A9, PL310 L2CC


@ -15,7 +15,7 @@ that goes into great technical depth about the BPF Architecture.
libbpf
======
Documentation/bpf/libbpf/libbpf.rst is a userspace library for loading and interacting with bpf programs.
Documentation/bpf/libbpf/index.rst is a userspace library for loading and interacting with bpf programs.
BPF Type Format (BTF)
=====================


@ -27,7 +27,7 @@ Sphinx Install
==============
The ReST markups currently used by the Documentation/ files are meant to be
built with ``Sphinx`` version 1.3 or higher.
built with ``Sphinx`` version 1.7 or higher.
There's a script that checks for the Sphinx requirements. Please see
:ref:`sphinx-pre-install` for further details.
@ -43,10 +43,6 @@ or ``virtualenv``, depending on how your distribution packaged Python 3.
.. note::
#) Sphinx versions below 1.5 don't work properly with Python's
docutils version 0.13.1 or higher. So, if you're willing to use
those versions, you should run ``pip install 'docutils==0.12'``.
#) It is recommended to use the RTD theme for html output. Depending
on the Sphinx version, it should be installed separately,
with ``pip install sphinx_rtd_theme``.
@ -55,13 +51,13 @@ or ``virtualenv``, depending on how your distribution packaged Python 3.
those expressions are written using LaTeX notation. It needs texlive
installed with amsfonts and amsmath in order to evaluate them.
In summary, if you want to install Sphinx version 1.7.9, you should do::
In summary, if you want to install Sphinx version 2.4.4, you should do::
$ virtualenv sphinx_1.7.9
$ . sphinx_1.7.9/bin/activate
(sphinx_1.7.9) $ pip install -r Documentation/sphinx/requirements.txt
$ virtualenv sphinx_2.4.4
$ . sphinx_2.4.4/bin/activate
(sphinx_2.4.4) $ pip install -r Documentation/sphinx/requirements.txt
After running ``. sphinx_1.7.9/bin/activate``, the prompt will change,
After running ``. sphinx_2.4.4/bin/activate``, the prompt will change,
in order to indicate that you're using the new environment. If you
open a new shell, you need to rerun this command to enter again at
the virtual environment before building the documentation.
@ -81,7 +77,7 @@ output.
PDF and LaTeX builds
--------------------
Such builds are currently supported only with Sphinx versions 1.4 and higher.
Such builds are currently supported only with Sphinx versions 2.4 and higher.
For PDF and LaTeX output, you'll also need ``XeLaTeX`` version 3.14159265.
@ -104,8 +100,8 @@ command line options for your distro::
You should run:
sudo dnf install -y texlive-luatex85
/usr/bin/virtualenv sphinx_1.7.9
. sphinx_1.7.9/bin/activate
/usr/bin/virtualenv sphinx_2.4.4
. sphinx_2.4.4/bin/activate
pip install -r Documentation/sphinx/requirements.txt
Can't build as 1 mandatory dependency is missing at ./scripts/sphinx-pre-install line 468.


@ -35,7 +35,7 @@ This document describes only the kernel module and the interactions
required with any user-space program. Subsequent text refers to this
as the "automount daemon" or simply "the daemon".
"autofs" is a Linux kernel module with provides the "autofs"
"autofs" is a Linux kernel module which provides the "autofs"
filesystem type. Several "autofs" filesystems can be mounted and they
can each be managed separately, or all managed by the same daemon.


@ -84,6 +84,16 @@ CONFIG_ENERGY_MODEL must be enabled to use the EM framework.
2.2 Registration of performance domains
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Registration of 'advanced' EM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The 'advanced' EM gets its name from the fact that the driver is allowed
to provide a more precise power model. It is not limited to a math formula
implemented in the framework (as in the 'simple' EM case), so it can better
reflect the real power measurements performed for each performance state.
Thus, this registration method should be preferred when accounting for EM
static power (leakage) is important.
Drivers are expected to register performance domains into the EM framework by
calling the following API::
@ -103,6 +113,18 @@ to: return warning/error, stop working or panic.
See Section 3. for an example of driver implementing this
callback, or Section 2.4 for further documentation on this API
Registration of 'simple' EM
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The 'simple' EM is registered using the framework helper function
cpufreq_register_em_with_opp(). It implements a power model which is tied to
the math formula::
Power = C * V^2 * f
The EM registered using this method might not correctly reflect the
physics of a real device, e.g. when static power (leakage) is important.
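As a rough numeric illustration of that formula (the capacitance, voltage and
frequency values below are made up, not taken from any real platform)::
$ awk 'BEGIN { C = 1.0e-9; V = 0.9; f = 1.2e9; printf "%.1f mW\n", C * V * V * f * 1000 }'
972.0 mW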
2.3 Accessing performance domains
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -138,6 +160,10 @@ or in Section 2.4
3. Example driver
-----------------
The CPUFreq framework supports a dedicated callback for registering
the EM for a given CPU(s) 'policy' object: cpufreq_driver::register_em().
That callback has to be implemented properly for a given driver,
because the framework calls it at the right time during setup.
This section provides a simple example of a CPUFreq driver registering a
performance domain in the Energy Model framework using the (fake) 'foo'
protocol. The driver implements an est_power() function to be provided to the
@ -167,25 +193,22 @@ EM framework::
20 return 0;
21 }
22
23 static int foo_cpufreq_init(struct cpufreq_policy *policy)
23 static void foo_cpufreq_register_em(struct cpufreq_policy *policy)
24 {
25 struct em_data_callback em_cb = EM_DATA_CB(est_power);
26 struct device *cpu_dev;
27 int nr_opp, ret;
27 int nr_opp;
28
29 cpu_dev = get_cpu_device(cpumask_first(policy->cpus));
30
31 /* Do the actual CPUFreq init work ... */
32 ret = do_foo_cpufreq_init(policy);
33 if (ret)
34 return ret;
35
36 /* Find the number of OPPs for this policy */
37 nr_opp = foo_get_nr_opp(policy);
31 /* Find the number of OPPs for this policy */
32 nr_opp = foo_get_nr_opp(policy);
33
34 /* And register the new performance domain */
35 em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus,
36 true);
37 }
38
39 /* And register the new performance domain */
40 em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus,
41 true);
42
43 return 0;
44 }
39 static struct cpufreq_driver foo_cpufreq_driver = {
40 .register_em = foo_cpufreq_register_em,
41 };


@ -54,7 +54,7 @@ mcelog 0.6 mcelog --version
iptables 1.4.2 iptables -V
openssl & libcrypto 1.0.0 openssl version
bc 1.06.95 bc --version
Sphinx\ [#f1]_ 1.3 sphinx-build --version
Sphinx\ [#f1]_ 1.7 sphinx-build --version
====================== =============== ========================================
.. [#f1] Sphinx is needed only to build the Kernel documentation


@ -22,8 +22,8 @@ use it, it will make your life as a kernel developer and in general much
easier.
Some subsystems and maintainer trees have additional information about
their workflow and expectations, see :ref:`Documentation/process/maintainer
handbooks <maintainer_handbooks_main>`.
their workflow and expectations, see
:ref:`Documentation/process/maintainer-handbooks.rst <maintainer_handbooks_main>`.
Obtain a current source tree
----------------------------


@ -2442,11 +2442,10 @@ Or this simple script!
#!/bin/bash
tracefs=`sed -ne 's/^tracefs \(.*\) tracefs.*/\1/p' /proc/mounts`
echo nop > $tracefs/tracing/current_tracer
echo 0 > $tracefs/tracing/tracing_on
echo $$ > $tracefs/tracing/set_ftrace_pid
echo function > $tracefs/tracing/current_tracer
echo 1 > $tracefs/tracing/tracing_on
echo 0 > $tracefs/tracing_on
echo $$ > $tracefs/set_ftrace_pid
echo function > $tracefs/current_tracer
echo 1 > $tracefs/tracing_on
exec "$@"
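For instance, assuming the script above is saved as ftrace-exec.sh (the name is
arbitrary), it can wrap any command so that only that command is function-traced;
writing to tracefs requires root::
$ chmod +x ftrace-exec.sh
$ sudo ./ftrace-exec.sh ls -l /tmp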


@ -35,7 +35,7 @@ Installazione Sphinx
====================
I marcatori ReST utilizzati nei file in Documentation/ sono pensati per essere
processati da ``Sphinx`` nella versione 1.3 o superiore.
processati da ``Sphinx`` nella versione 1.7 o superiore.
Esiste uno script che verifica i requisiti Sphinx. Per ulteriori dettagli
consultate :ref:`it_sphinx-pre-install`.
@ -53,11 +53,6 @@ pacchettizzato dalla vostra distribuzione.
.. note::
#) Le versioni di Sphinx inferiori alla 1.5 non funzionano bene
con il pacchetto Python docutils versione 0.13.1 o superiore.
Se volete usare queste versioni, allora dovere eseguire
``pip install 'docutils==0.12'``.
#) Viene raccomandato l'uso del tema RTD per la documentazione in HTML.
A seconda della versione di Sphinx, potrebbe essere necessaria
l'installazione tramite il comando ``pip install sphinx_rtd_theme``.
@ -67,13 +62,13 @@ pacchettizzato dalla vostra distribuzione.
utilizzando LaTeX. Per una corretta interpretazione, è necessario aver
installato texlive con i pacchetti amdfonts e amsmath.
Riassumendo, se volete installare la versione 1.7.9 di Sphinx dovete eseguire::
Riassumendo, se volete installare la versione 2.4.4 di Sphinx dovete eseguire::
$ virtualenv sphinx_1.7.9
$ . sphinx_1.7.9/bin/activate
(sphinx_1.7.9) $ pip install -r Documentation/sphinx/requirements.txt
$ virtualenv sphinx_2.4.4
$ . sphinx_2.4.4/bin/activate
(sphinx_2.4.4) $ pip install -r Documentation/sphinx/requirements.txt
Dopo aver eseguito ``. sphinx_1.7.9/bin/activate``, il prompt cambierà per
Dopo aver eseguito ``. sphinx_2.4.4/bin/activate``, il prompt cambierà per
indicare che state usando il nuovo ambiente. Se aprite un nuova sessione,
prima di generare la documentazione, dovrete rieseguire questo comando per
rientrare nell'ambiente virtuale.
@ -94,7 +89,7 @@ Generazione in PDF e LaTeX
--------------------------
Al momento, la generazione di questi documenti è supportata solo dalle
versioni di Sphinx superiori alla 1.4.
versioni di Sphinx superiori alla 2.4.
Per la generazione di PDF e LaTeX, avrete bisogno anche del pacchetto
``XeLaTeX`` nella versione 3.14159265
@ -119,8 +114,8 @@ l'installazione::
You should run:
sudo dnf install -y texlive-luatex85
/usr/bin/virtualenv sphinx_1.7.9
. sphinx_1.7.9/bin/activate
/usr/bin/virtualenv sphinx_2.4.4
. sphinx_2.4.4/bin/activate
pip install -r Documentation/sphinx/requirements.txt
Can't build as 1 mandatory dependency is missing at ./scripts/sphinx-pre-install line 468.


@ -57,7 +57,7 @@ mcelog 0.6 mcelog --version
iptables 1.4.2 iptables -V
openssl & libcrypto 1.0.0 openssl version
bc 1.06.95 bc --version
Sphinx\ [#f1]_ 1.3 sphinx-build --version
Sphinx\ [#f1]_ 1.7 sphinx-build --version
====================== ================= ========================================
.. [#f1] Sphinx è necessario solo per produrre la documentazione del Kernel


@ -26,7 +26,7 @@ reStructuredText文件可能包含包含来自源文件的结构化文档注释
安装Sphinx
==========
Documentation/ 下的ReST文件现在使用sphinx1.3或更高版本构建。
Documentation/ 下的ReST文件现在使用sphinx1.7或更高版本构建。
这有一个脚本可以检查Sphinx的依赖项。更多详细信息见
:ref:`sphinx-pre-install_zh`
@ -40,22 +40,19 @@ Documentation/ 下的ReST文件现在使用sphinx1.3或更高版本构建。
.. note::
#) 低于1.5版本的Sphinx无法与Python的0.13.1或更高版本docutils一起正常工作。
如果您想使用这些版本,那么应该运行 ``pip install 'docutils==0.12'``
#) html输出建议使用RTD主题。根据Sphinx版本的不同它应该用
``pip install sphinx_rtd_theme`` 单独安装。
#) 一些ReST页面包含数学表达式。由于Sphinx的工作方式这些表达式是使用 LaTeX
编写的。它需要安装amsfonts和amsmath宏包以便显示。
总之如您要安装Sphinx 1.7.9版本,应执行::
总之如您要安装Sphinx 2.4.4版本,应执行::
$ virtualenv sphinx_1.7.9
$ . sphinx_1.7.9/bin/activate
(sphinx_1.7.9) $ pip install -r Documentation/sphinx/requirements.txt
$ virtualenv sphinx_2.4.4
$ . sphinx_2.4.4/bin/activate
(sphinx_2.4.4) $ pip install -r Documentation/sphinx/requirements.txt
在运行 ``. sphinx_1.7.9/bin/activate`` 之后,提示符将变化,以指示您正在使用新
在运行 ``. sphinx_2.4.4/bin/activate`` 之后,提示符将变化,以指示您正在使用新
环境。如果您打开了一个新的shell那么在构建文档之前您需要重新运行此命令以再
次进入虚拟环境中。
@ -71,7 +68,7 @@ Documentation/ 下的ReST文件现在使用sphinx1.3或更高版本构建。
PDF和LaTeX构建
--------------
目前只有Sphinx 1.4及更高版本才支持这种构建。
目前只有Sphinx 2.4及更高版本才支持这种构建。
对于PDF和LaTeX输出还需要 ``XeLaTeX`` 3.14159265版本。(译注:此版本号真实
存在)
@ -93,8 +90,8 @@ PDF和LaTeX构建
You should run:
sudo dnf install -y texlive-luatex85
/usr/bin/virtualenv sphinx_1.7.9
. sphinx_1.7.9/bin/activate
/usr/bin/virtualenv sphinx_2.4.4
. sphinx_2.4.4/bin/activate
pip install -r Documentation/sphinx/requirements.txt
Can't build as 1 mandatory dependency is missing at ./scripts/sphinx-pre-install line 468.


@ -36,14 +36,14 @@ Linux内核管理风格
每个人都认为管理者做决定,而且决策很重要。决定越大越痛苦,管理者就必须越高级。
这很明显,但事实并非如此。
游戏的名字**避免** 做出决定。尤其是如果有人告诉你“选择ab
最重要的**避免** 做出决定。尤其是如果有人告诉你“选择ab
我们真的需要你来做决定”,你就是陷入麻烦的管理者。你管理的人比你更了解细节,
所以如果他们来找你做技术决策,你完蛋了。你显然没有能力为他们做这个决定。
(推论:如果你管理的人不比你更了解细节,你也会被搞砸,尽管原因完全不同。
也就是说,你的工作是错的,他们应该管理你的才智)
所以游戏的名字**避免** 做出决定,至少是那些大而痛苦的决定。做一些小的
所以最重要的**避免** 做出决定,至少是那些大而痛苦的决定。做一些小的
和非结果性的决定是很好的,并且使您看起来好像知道自己在做什么,所以内核管理者
需要做的是将那些大的和痛苦的决定变成那些没有人真正关心的小事情。


@ -3733,7 +3733,7 @@ F: drivers/scsi/bnx2i/
BROADCOM BNX2X 10 GIGABIT ETHERNET DRIVER
M: Ariel Elior <aelior@marvell.com>
M: Sudarsana Kalluru <skalluru@marvell.com>
M: GR-everest-linux-l2@marvell.com
M: Manish Chopra <manishc@marvell.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/broadcom/bnx2x/
@ -10445,7 +10445,7 @@ F: arch/riscv/include/uapi/asm/kvm*
F: arch/riscv/kvm/
KERNEL VIRTUAL MACHINE for s390 (KVM/s390)
M: Christian Borntraeger <borntraeger@de.ibm.com>
M: Christian Borntraeger <borntraeger@linux.ibm.com>
M: Janosch Frank <frankja@linux.ibm.com>
R: David Hildenbrand <david@redhat.com>
R: Claudio Imbrenda <imbrenda@linux.ibm.com>
@ -15593,7 +15593,7 @@ F: drivers/scsi/qedi/
QLOGIC QL4xxx ETHERNET DRIVER
M: Ariel Elior <aelior@marvell.com>
M: GR-everest-linux-l2@marvell.com
M: Manish Chopra <manishc@marvell.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/qlogic/qed/
@ -16573,7 +16573,7 @@ F: drivers/video/fbdev/savage/
S390
M: Heiko Carstens <hca@linux.ibm.com>
M: Vasily Gorbik <gor@linux.ibm.com>
M: Christian Borntraeger <borntraeger@de.ibm.com>
M: Christian Borntraeger <borntraeger@linux.ibm.com>
R: Alexander Gordeev <agordeev@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
@ -20317,7 +20317,8 @@ F: arch/x86/include/asm/vmware.h
F: arch/x86/kernel/cpu/vmware.c
VMWARE PVRDMA DRIVER
M: Adit Ranadive <aditr@vmware.com>
M: Bryan Tan <bryantan@vmware.com>
M: Vishnu Dasa <vdasa@vmware.com>
M: VMware PV-Drivers <pv-drivers@vmware.com>
L: linux-rdma@vger.kernel.org
S: Maintained


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 16
SUBLEVEL = 0
EXTRAVERSION = -rc1
EXTRAVERSION = -rc2
NAME = Trick or Treat
# *DOCUMENTATION*


@ -1463,6 +1463,7 @@ config HIGHMEM
bool "High Memory Support"
depends on MMU
select KMAP_LOCAL
select KMAP_LOCAL_NON_LINEAR_PTE_ARRAY
help
The address space of ARM processors is only 4 Gigabytes large
and it has to accommodate user address space, kernel address


@ -223,7 +223,14 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = 1;
break;
case KVM_CAP_NR_VCPUS:
r = num_online_cpus();
/*
* ARM64 treats KVM_CAP_NR_CPUS differently from all other
* architectures, as it does not always bound it to
* KVM_CAP_MAX_VCPUS. It should not matter much because
* this is just an advisory value.
*/
r = min_t(unsigned int, num_online_cpus(),
kvm_arm_default_max_vcpus());
break;
case KVM_CAP_MAX_VCPUS:
case KVM_CAP_MAX_VCPU_ID:


@ -1,26 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Timer support for Hexagon
*
* Copyright (c) 2010-2011, The Linux Foundation. All rights reserved.
*/
#ifndef _ASM_TIMER_REGS_H
#define _ASM_TIMER_REGS_H
/* This stuff should go into a platform specific file */
#define TCX0_CLK_RATE 19200
#define TIMER_ENABLE 0
#define TIMER_CLR_ON_MATCH 1
/*
* 8x50 HDD Specs 5-8. Simulator co-sim not fixed until
* release 1.1, and then it's "adjustable" and probably not defaulted.
*/
#define RTOS_TIMER_INT 3
#ifdef CONFIG_HEXAGON_COMET
#define RTOS_TIMER_REGS_ADDR 0xAB000000UL
#endif
#define SLEEP_CLK_RATE 32000
#endif


@ -7,11 +7,10 @@
#define _ASM_TIMEX_H
#include <asm-generic/timex.h>
#include <asm/timer-regs.h>
#include <asm/hexagon_vm.h>
/* Using TCX0 as our clock. CLOCK_TICK_RATE scheduled to be removed. */
#define CLOCK_TICK_RATE TCX0_CLK_RATE
#define CLOCK_TICK_RATE 19200
#define ARCH_HAS_READ_CURRENT_TIMER

arch/hexagon/kernel/.gitignore (new file)

@ -0,0 +1 @@
vmlinux.lds


@ -17,9 +17,10 @@
#include <linux/of_irq.h>
#include <linux/module.h>
#include <asm/timer-regs.h>
#include <asm/hexagon_vm.h>
#define TIMER_ENABLE BIT(0)
/*
* For the clocksource we need:
* pcycle frequency (600MHz)
@ -33,6 +34,13 @@ cycles_t pcycle_freq_mhz;
cycles_t thread_freq_mhz;
cycles_t sleep_clk_freq;
/*
* 8x50 HDD Specs 5-8. Simulator co-sim not fixed until
* release 1.1, and then it's "adjustable" and probably not defaulted.
*/
#define RTOS_TIMER_INT 3
#define RTOS_TIMER_REGS_ADDR 0xAB000000UL
static struct resource rtos_timer_resources[] = {
{
.start = RTOS_TIMER_REGS_ADDR,
@ -80,7 +88,7 @@ static int set_next_event(unsigned long delta, struct clock_event_device *evt)
iowrite32(0, &rtos_timer->clear);
iowrite32(delta, &rtos_timer->match);
iowrite32(1 << TIMER_ENABLE, &rtos_timer->enable);
iowrite32(TIMER_ENABLE, &rtos_timer->enable);
return 0;
}


@ -27,6 +27,7 @@ void __raw_readsw(const void __iomem *addr, void *data, int len)
*dst++ = *src;
}
EXPORT_SYMBOL(__raw_readsw);
/*
* __raw_writesw - read words a short at a time
@ -47,6 +48,7 @@ void __raw_writesw(void __iomem *addr, const void *data, int len)
}
EXPORT_SYMBOL(__raw_writesw);
/* Pretty sure len is pre-adjusted for the length of the access already */
void __raw_readsl(const void __iomem *addr, void *data, int len)
@ -62,6 +64,7 @@ void __raw_readsl(const void __iomem *addr, void *data, int len)
}
EXPORT_SYMBOL(__raw_readsl);
void __raw_writesl(void __iomem *addr, const void *data, int len)
{
@ -76,3 +79,4 @@ void __raw_writesl(void __iomem *addr, const void *data, int len)
}
EXPORT_SYMBOL(__raw_writesl);


@ -1145,7 +1145,7 @@ asmlinkage void set_esp0(unsigned long ssp)
*/
asmlinkage void fpsp040_die(void)
{
force_fatal_sig(SIGSEGV);
force_exit_sig(SIGSEGV);
}
#ifdef CONFIG_M68KFPU_EMU


@ -381,6 +381,12 @@ void clk_disable(struct clk *clk)
EXPORT_SYMBOL(clk_disable);
struct clk *clk_get_parent(struct clk *clk)
{
return NULL;
}
EXPORT_SYMBOL(clk_get_parent);
unsigned long clk_get_rate(struct clk *clk)
{
if (!clk)


@ -75,7 +75,7 @@ static unsigned int __init gen_fdt_mem_array(
__init int yamon_dt_append_memory(void *fdt,
const struct yamon_mem_region *regions)
{
unsigned long phys_memsize, memsize;
unsigned long phys_memsize = 0, memsize;
__be32 mem_array[2 * MAX_MEM_ARRAY_ENTRIES];
unsigned int mem_entries;
int i, err, mem_off;


@ -387,3 +387,4 @@
446 n32 landlock_restrict_self sys_landlock_restrict_self
# 447 reserved for memfd_secret
448 n32 process_mrelease sys_process_mrelease
449 n32 futex_waitv sys_futex_waitv


@ -363,3 +363,4 @@
446 n64 landlock_restrict_self sys_landlock_restrict_self
# 447 reserved for memfd_secret
448 n64 process_mrelease sys_process_mrelease
449 n64 futex_waitv sys_futex_waitv


@ -436,3 +436,4 @@
446 o32 landlock_restrict_self sys_landlock_restrict_self
# 447 reserved for memfd_secret
448 o32 process_mrelease sys_process_mrelease
449 o32 futex_waitv sys_futex_waitv


@ -1067,7 +1067,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = 1;
break;
case KVM_CAP_NR_VCPUS:
r = num_online_cpus();
r = min_t(unsigned int, num_online_cpus(), KVM_MAX_VCPUS);
break;
case KVM_CAP_MAX_VCPUS:
r = KVM_MAX_VCPUS;


@ -158,6 +158,12 @@ void clk_deactivate(struct clk *clk)
}
EXPORT_SYMBOL(clk_deactivate);
struct clk *clk_get_parent(struct clk *clk)
{
return NULL;
}
EXPORT_SYMBOL(clk_get_parent);
static inline u32 get_counter_resolution(void)
{
u32 res;


@ -231,6 +231,7 @@ CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRC_CCITT=m
CONFIG_CRC_T10DIF=y
CONFIG_FONTS=y
CONFIG_PRINTK_TIME=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_MEMORY_INIT=y


@ -3,38 +3,19 @@
* Copyright (C) 1999 Hewlett-Packard (Frank Rowand)
* Copyright (C) 1999 Philipp Rumpf <prumpf@tux.org>
* Copyright (C) 1999 SuSE GmbH
* Copyright (C) 2021 Helge Deller <deller@gmx.de>
*/
#ifndef _PARISC_ASSEMBLY_H
#define _PARISC_ASSEMBLY_H
#define CALLEE_FLOAT_FRAME_SIZE 80
#ifdef CONFIG_64BIT
#define LDREG ldd
#define STREG std
#define LDREGX ldd,s
#define LDREGM ldd,mb
#define STREGM std,ma
#define SHRREG shrd
#define SHLREG shld
#define ANDCM andcm,*
#define COND(x) * ## x
#define RP_OFFSET 16
#define FRAME_SIZE 128
#define CALLEE_REG_FRAME_SIZE 144
#define REG_SZ 8
#define ASM_ULONG_INSN .dword
#else /* CONFIG_64BIT */
#define LDREG ldw
#define STREG stw
#define LDREGX ldwx,s
#define LDREGM ldwm
#define STREGM stwm
#define SHRREG shr
#define SHLREG shlw
#define ANDCM andcm
#define COND(x) x
#define RP_OFFSET 20
#define FRAME_SIZE 64
#define CALLEE_REG_FRAME_SIZE 128
@ -45,6 +26,7 @@
/* Frame alignment for 32- and 64-bit */
#define FRAME_ALIGN 64
#define CALLEE_FLOAT_FRAME_SIZE 80
#define CALLEE_SAVE_FRAME_SIZE (CALLEE_REG_FRAME_SIZE + CALLEE_FLOAT_FRAME_SIZE)
#ifdef CONFIG_PA20
@ -67,6 +49,28 @@
#ifdef __ASSEMBLY__
#ifdef CONFIG_64BIT
#define LDREG ldd
#define STREG std
#define LDREGX ldd,s
#define LDREGM ldd,mb
#define STREGM std,ma
#define SHRREG shrd
#define SHLREG shld
#define ANDCM andcm,*
#define COND(x) * ## x
#else /* CONFIG_64BIT */
#define LDREG ldw
#define STREG stw
#define LDREGX ldwx,s
#define LDREGM ldwm
#define STREGM stwm
#define SHRREG shr
#define SHLREG shlw
#define ANDCM andcm
#define COND(x) x
#endif
#ifdef CONFIG_64BIT
/* the 64-bit pa gnu assembler unfortunately defaults to .level 1.1 or 2.0 so
* work around that for now... */


@ -5,6 +5,7 @@
#ifndef __ASSEMBLY__
#include <linux/types.h>
#include <linux/stringify.h>
#include <asm/assembly.h>
#define JUMP_LABEL_NOP_SIZE 4


@ -2,7 +2,7 @@
#ifndef _ASM_PARISC_RT_SIGFRAME_H
#define _ASM_PARISC_RT_SIGFRAME_H
#define SIGRETURN_TRAMP 3
#define SIGRETURN_TRAMP 4
#define SIGRESTARTBLOCK_TRAMP 5
#define TRAMP_SIZE (SIGRETURN_TRAMP + SIGRESTARTBLOCK_TRAMP)


@ -288,21 +288,22 @@ setup_rt_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs,
already in userspace. The first words of tramp are used to
save the previous sigrestartblock trampoline that might be
on the stack. We start the sigreturn trampoline at
SIGRESTARTBLOCK_TRAMP. */
SIGRESTARTBLOCK_TRAMP+X. */
err |= __put_user(in_syscall ? INSN_LDI_R25_1 : INSN_LDI_R25_0,
&frame->tramp[SIGRESTARTBLOCK_TRAMP+0]);
err |= __put_user(INSN_BLE_SR2_R0,
err |= __put_user(INSN_LDI_R20,
&frame->tramp[SIGRESTARTBLOCK_TRAMP+1]);
err |= __put_user(INSN_LDI_R20,
err |= __put_user(INSN_BLE_SR2_R0,
&frame->tramp[SIGRESTARTBLOCK_TRAMP+2]);
err |= __put_user(INSN_NOP, &frame->tramp[SIGRESTARTBLOCK_TRAMP+3]);
start = (unsigned long) &frame->tramp[SIGRESTARTBLOCK_TRAMP+0];
end = (unsigned long) &frame->tramp[SIGRESTARTBLOCK_TRAMP+3];
start = (unsigned long) &frame->tramp[0];
end = (unsigned long) &frame->tramp[TRAMP_SIZE];
flush_user_dcache_range_asm(start, end);
flush_user_icache_range_asm(start, end);
/* TRAMP Words 0-4, Length 5 = SIGRESTARTBLOCK_TRAMP
* TRAMP Words 5-7, Length 3 = SIGRETURN_TRAMP
* TRAMP Words 5-9, Length 4 = SIGRETURN_TRAMP
* So the SIGRETURN_TRAMP is at the end of SIGRESTARTBLOCK_TRAMP
*/
rp = (unsigned long) &frame->tramp[SIGRESTARTBLOCK_TRAMP];


@ -36,7 +36,7 @@ struct compat_regfile {
compat_int_t rf_sar;
};
#define COMPAT_SIGRETURN_TRAMP 3
#define COMPAT_SIGRETURN_TRAMP 4
#define COMPAT_SIGRESTARTBLOCK_TRAMP 5
#define COMPAT_TRAMP_SIZE (COMPAT_SIGRETURN_TRAMP + \
COMPAT_SIGRESTARTBLOCK_TRAMP)


@ -446,3 +446,4 @@
446 common landlock_restrict_self sys_landlock_restrict_self
# 447 reserved for memfd_secret
448 common process_mrelease sys_process_mrelease
449 common futex_waitv sys_futex_waitv


@ -196,3 +196,6 @@ clean-files := vmlinux.lds
# Force dependency (incbin is bad)
$(obj)/vdso32_wrapper.o : $(obj)/vdso32/vdso32.so.dbg
$(obj)/vdso64_wrapper.o : $(obj)/vdso64/vdso64.so.dbg
# for cleaning
subdir- += vdso32 vdso64


@ -733,6 +733,7 @@ _GLOBAL(mmu_pin_tlb)
#ifdef CONFIG_PIN_TLB_DATA
LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET)
LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG | _PMD_ACCESSED)
li r8, 0
#ifdef CONFIG_PIN_TLB_IMMR
li r0, 3
#else
@ -741,26 +742,26 @@ _GLOBAL(mmu_pin_tlb)
mtctr r0
cmpwi r4, 0
beq 4f
LOAD_REG_IMMEDIATE(r8, 0xf0 | _PAGE_RO | _PAGE_SPS | _PAGE_SH | _PAGE_PRESENT)
LOAD_REG_ADDR(r9, _sinittext)
2: ori r0, r6, MD_EVALID
ori r12, r8, 0xf0 | _PAGE_RO | _PAGE_SPS | _PAGE_SH | _PAGE_PRESENT
mtspr SPRN_MD_CTR, r5
mtspr SPRN_MD_EPN, r0
mtspr SPRN_MD_TWC, r7
mtspr SPRN_MD_RPN, r8
mtspr SPRN_MD_RPN, r12
addi r5, r5, 0x100
addis r6, r6, SZ_8M@h
addis r8, r8, SZ_8M@h
cmplw r6, r9
bdnzt lt, 2b
4: LOAD_REG_IMMEDIATE(r8, 0xf0 | _PAGE_DIRTY | _PAGE_SPS | _PAGE_SH | _PAGE_PRESENT)
4:
2: ori r0, r6, MD_EVALID
ori r12, r8, 0xf0 | _PAGE_DIRTY | _PAGE_SPS | _PAGE_SH | _PAGE_PRESENT
mtspr SPRN_MD_CTR, r5
mtspr SPRN_MD_EPN, r0
mtspr SPRN_MD_TWC, r7
mtspr SPRN_MD_RPN, r8
mtspr SPRN_MD_RPN, r12
addi r5, r5, 0x100
addis r6, r6, SZ_8M@h
addis r8, r8, SZ_8M@h
@ -781,7 +782,7 @@ _GLOBAL(mmu_pin_tlb)
#endif
#if defined(CONFIG_PIN_TLB_IMMR) || defined(CONFIG_PIN_TLB_DATA)
lis r0, (MD_RSV4I | MD_TWAM)@h
mtspr SPRN_MI_CTR, r0
mtspr SPRN_MD_CTR, r0
#endif
mtspr SPRN_SRR1, r10
mtspr SPRN_SRR0, r11


@ -25,8 +25,14 @@ static inline int __get_user_sigset(sigset_t *dst, const sigset_t __user *src)
return __get_user(dst->sig[0], (u64 __user *)&src->sig[0]);
}
#define unsafe_get_user_sigset(dst, src, label) \
unsafe_get_user((dst)->sig[0], (u64 __user *)&(src)->sig[0], label)
#define unsafe_get_user_sigset(dst, src, label) do { \
sigset_t *__dst = dst; \
const sigset_t __user *__src = src; \
int i; \
\
for (i = 0; i < _NSIG_WORDS; i++) \
unsafe_get_user(__dst->sig[i], &__src->sig[i], label); \
} while (0)
#ifdef CONFIG_VSX
extern unsigned long copy_vsx_to_user(void __user *to,


@ -1063,7 +1063,7 @@ SYSCALL_DEFINE3(swapcontext, struct ucontext __user *, old_ctx,
* We kill the task with a SIGSEGV in this situation.
*/
if (do_setcontext(new_ctx, regs, 0)) {
force_fatal_sig(SIGSEGV);
force_exit_sig(SIGSEGV);
return -EFAULT;
}


@ -704,7 +704,7 @@ SYSCALL_DEFINE3(swapcontext, struct ucontext __user *, old_ctx,
*/
if (__get_user_sigset(&set, &new_ctx->uc_sigmask)) {
force_fatal_sig(SIGSEGV);
force_exit_sig(SIGSEGV);
return -EFAULT;
}
set_current_blocked(&set);
@ -713,7 +713,7 @@ SYSCALL_DEFINE3(swapcontext, struct ucontext __user *, old_ctx,
return -EFAULT;
if (__unsafe_restore_sigcontext(current, NULL, 0, &new_ctx->uc_mcontext)) {
user_read_access_end();
force_fatal_sig(SIGSEGV);
force_exit_sig(SIGSEGV);
return -EFAULT;
}
user_read_access_end();


@ -187,6 +187,12 @@ static void watchdog_smp_panic(int cpu, u64 tb)
if (sysctl_hardlockup_all_cpu_backtrace)
trigger_allbutself_cpu_backtrace();
/*
* Force flush any remote buffers that might be stuck in IRQ context
* and therefore could not run their irq_work.
*/
printk_trigger_flush();
if (hardlockup_panic)
nmi_panic(NULL, "Hard LOCKUP");


@ -2005,7 +2005,7 @@ hcall_real_table:
.globl hcall_real_table_end
hcall_real_table_end:
_GLOBAL(kvmppc_h_set_xdabr)
_GLOBAL_TOC(kvmppc_h_set_xdabr)
EXPORT_SYMBOL_GPL(kvmppc_h_set_xdabr)
andi. r0, r5, DABRX_USER | DABRX_KERNEL
beq 6f
@ -2015,7 +2015,7 @@ EXPORT_SYMBOL_GPL(kvmppc_h_set_xdabr)
6: li r3, H_PARAMETER
blr
_GLOBAL(kvmppc_h_set_dabr)
_GLOBAL_TOC(kvmppc_h_set_dabr)
EXPORT_SYMBOL_GPL(kvmppc_h_set_dabr)
li r5, DABRX_USER | DABRX_KERNEL
3:


@ -641,9 +641,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
* implementations just count online CPUs.
*/
if (hv_enabled)
r = num_present_cpus();
r = min_t(unsigned int, num_present_cpus(), KVM_MAX_VCPUS);
else
r = num_online_cpus();
r = min_t(unsigned int, num_online_cpus(), KVM_MAX_VCPUS);
break;
case KVM_CAP_MAX_VCPUS:
r = KVM_MAX_VCPUS;


@ -314,7 +314,7 @@ static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size
pr_warn("KASLR: No safe seed for randomizing the kernel base.\n");
ram = min_t(phys_addr_t, __max_low_memory, size);
ram = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM, true, false);
ram = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM, true, true);
linear_sz = min_t(unsigned long, ram, SZ_512M);
/* If the linear size is smaller than 64M, do not randmize */


@ -645,7 +645,7 @@ static void early_init_this_mmu(void)
if (map)
linear_map_top = map_mem_in_cams(linear_map_top,
num_cams, true, true);
num_cams, false, true);
}
#endif
@ -766,7 +766,7 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
num_cams = (mfspr(SPRN_TLB1CFG) & TLBnCFG_N_ENTRY) / 4;
linear_sz = map_mem_in_cams(first_memblock_size, num_cams,
false, true);
true, true);
ppc64_rma_size = min_t(u64, linear_sz, 0x40000000);
} else


@ -376,9 +376,9 @@ static void initialize_form2_numa_distance_lookup_table(void)
{
int i, j;
struct device_node *root;
const __u8 *numa_dist_table;
const __u8 *form2_distances;
const __be32 *numa_lookup_index;
int numa_dist_table_length;
int form2_distances_length;
int max_numa_index, distance_index;
if (firmware_has_feature(FW_FEATURE_OPAL))
@ -392,45 +392,41 @@ static void initialize_form2_numa_distance_lookup_table(void)
max_numa_index = of_read_number(&numa_lookup_index[0], 1);
/* first element of the array is the size and is encode-int */
numa_dist_table = of_get_property(root, "ibm,numa-distance-table", NULL);
numa_dist_table_length = of_read_number((const __be32 *)&numa_dist_table[0], 1);
form2_distances = of_get_property(root, "ibm,numa-distance-table", NULL);
form2_distances_length = of_read_number((const __be32 *)&form2_distances[0], 1);
/* Skip the size which is encoded int */
numa_dist_table += sizeof(__be32);
form2_distances += sizeof(__be32);
pr_debug("numa_dist_table_len = %d, numa_dist_indexes_len = %d\n",
numa_dist_table_length, max_numa_index);
pr_debug("form2_distances_len = %d, numa_dist_indexes_len = %d\n",
form2_distances_length, max_numa_index);
for (i = 0; i < max_numa_index; i++)
/* +1 skip the max_numa_index in the property */
numa_id_index_table[i] = of_read_number(&numa_lookup_index[i + 1], 1);
if (numa_dist_table_length != max_numa_index * max_numa_index) {
if (form2_distances_length != max_numa_index * max_numa_index) {
WARN(1, "Wrong NUMA distance information\n");
/* consider everybody else just remote. */
for (i = 0; i < max_numa_index; i++) {
for (j = 0; j < max_numa_index; j++) {
int nodeA = numa_id_index_table[i];
int nodeB = numa_id_index_table[j];
if (nodeA == nodeB)
numa_distance_table[nodeA][nodeB] = LOCAL_DISTANCE;
else
numa_distance_table[nodeA][nodeB] = REMOTE_DISTANCE;
}
}
form2_distances = NULL; // don't use it
}
distance_index = 0;
for (i = 0; i < max_numa_index; i++) {
for (j = 0; j < max_numa_index; j++) {
int nodeA = numa_id_index_table[i];
int nodeB = numa_id_index_table[j];
int dist;
numa_distance_table[nodeA][nodeB] = numa_dist_table[distance_index++];
pr_debug("dist[%d][%d]=%d ", nodeA, nodeB, numa_distance_table[nodeA][nodeB]);
if (form2_distances)
dist = form2_distances[distance_index++];
else if (nodeA == nodeB)
dist = LOCAL_DISTANCE;
else
dist = REMOTE_DISTANCE;
numa_distance_table[nodeA][nodeB] = dist;
pr_debug("dist[%d][%d]=%d ", nodeA, nodeB, dist);
}
}
of_node_put(root);
}


@ -186,7 +186,6 @@ err:
static int mcu_remove(struct i2c_client *client)
{
struct mcu *mcu = i2c_get_clientdata(client);
int ret;
kthread_stop(shutdown_thread);


@ -1094,15 +1094,6 @@ static phys_addr_t ddw_memory_hotplug_max(void)
phys_addr_t max_addr = memory_hotplug_max();
struct device_node *memory;
/*
* The "ibm,pmemory" can appear anywhere in the address space.
* Assuming it is still backed by page structs, set the upper limit
* for the huge DMA window as MAX_PHYSMEM_BITS.
*/
if (of_find_node_by_type(NULL, "ibm,pmemory"))
return (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
(phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS);
for_each_node_by_type(memory, "memory") {
unsigned long start, size;
int n_mem_addr_cells, n_mem_size_cells, len;
@ -1238,7 +1229,6 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
u32 ddw_avail[DDW_APPLICABLE_SIZE];
struct dma_win *window;
struct property *win64;
bool ddw_enabled = false;
struct failed_ddw_pdn *fpdn;
bool default_win_removed = false, direct_mapping = false;
bool pmem_present;
@ -1253,7 +1243,6 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
if (find_existing_ddw(pdn, &dev->dev.archdata.dma_offset, &len)) {
direct_mapping = (len >= max_ram_len);
ddw_enabled = true;
goto out_unlock;
}
@ -1367,8 +1356,10 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
len = order_base_2(query.largest_available_block << page_shift);
win_name = DMA64_PROPNAME;
} else {
direct_mapping = true;
win_name = DIRECT64_PROPNAME;
direct_mapping = !default_win_removed ||
(len == MAX_PHYSMEM_BITS) ||
(!pmem_present && (len == max_ram_len));
win_name = direct_mapping ? DIRECT64_PROPNAME : DMA64_PROPNAME;
}
ret = create_ddw(dev, ddw_avail, &create, page_shift, len);
@ -1406,8 +1397,8 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
dev_info(&dev->dev, "failed to map DMA window for %pOF: %d\n",
dn, ret);
/* Make sure to clean DDW if any TCE was set*/
clean_dma_window(pdn, win64->value);
/* Make sure to clean DDW if any TCE was set*/
clean_dma_window(pdn, win64->value);
goto out_del_list;
}
} else {
@ -1454,7 +1445,6 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
spin_unlock(&dma_win_list_lock);
dev->dev.archdata.dma_offset = win_addr;
ddw_enabled = true;
goto out_unlock;
out_del_list:
@ -1490,10 +1480,10 @@ out_unlock:
* as RAM, then we failed to create a window to cover persistent
* memory and need to set the DMA limit.
*/
if (pmem_present && ddw_enabled && direct_mapping && len == max_ram_len)
if (pmem_present && direct_mapping && len == max_ram_len)
dev->dev.bus_dma_limit = dev->dev.archdata.dma_offset + (1ULL << len);
return ddw_enabled && direct_mapping;
return direct_mapping;
}
static void pci_dma_dev_setup_pSeriesLP(struct pci_dev *dev)


@ -3,7 +3,6 @@ config PPC_XIVE
bool
select PPC_SMP_MUXED_IPI
select HARDIRQS_SW_RESEND
select IRQ_DOMAIN_NOMAP
config PPC_XIVE_NATIVE
bool


@ -1443,8 +1443,7 @@ static const struct irq_domain_ops xive_irq_domain_ops = {
static void __init xive_init_host(struct device_node *np)
{
xive_irq_domain = irq_domain_add_nomap(np, XIVE_MAX_IRQ,
&xive_irq_domain_ops, NULL);
xive_irq_domain = irq_domain_add_tree(np, &xive_irq_domain_ops, NULL);
if (WARN_ON(xive_irq_domain == NULL))
return;
irq_set_default_host(xive_irq_domain);


@ -107,11 +107,13 @@ PHONY += vdso_install
vdso_install:
$(Q)$(MAKE) $(build)=arch/riscv/kernel/vdso $@
ifeq ($(KBUILD_EXTMOD),)
ifeq ($(CONFIG_MMU),y)
prepare: vdso_prepare
vdso_prepare: prepare0
$(Q)$(MAKE) $(build)=arch/riscv/kernel/vdso include/generated/vdso-offsets.h
endif
endif
ifneq ($(CONFIG_XIP_KERNEL),y)
ifeq ($(CONFIG_RISCV_M_MODE)$(CONFIG_SOC_CANAAN),yy)


@ -19,6 +19,8 @@ CONFIG_SOC_VIRT=y
CONFIG_SOC_MICROCHIP_POLARFIRE=y
CONFIG_SMP=y
CONFIG_HOTPLUG_CPU=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_JUMP_LABEL=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y


@ -19,6 +19,8 @@ CONFIG_SOC_VIRT=y
CONFIG_ARCH_RV32I=y
CONFIG_SMP=y
CONFIG_HOTPLUG_CPU=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_JUMP_LABEL=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y


@ -740,7 +740,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
* Ensure we set mode to IN_GUEST_MODE after we disable
* interrupts and before the final VCPU requests check.
* See the comment in kvm_vcpu_exiting_guest_mode() and
* Documentation/virtual/kvm/vcpu-requests.rst
* Documentation/virt/kvm/vcpu-requests.rst
*/
vcpu->mode = IN_GUEST_MODE;


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/**
/*
* Copyright (c) 2019 Western Digital Corporation or its affiliates.
*
* Authors:


@ -74,7 +74,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = 1;
break;
case KVM_CAP_NR_VCPUS:
r = num_online_cpus();
r = min_t(unsigned int, num_online_cpus(), KVM_MAX_VCPUS);
break;
case KVM_CAP_MAX_VCPUS:
r = KVM_MAX_VCPUS;


@ -47,7 +47,7 @@ config ARCH_SUPPORTS_UPROBES
config KASAN_SHADOW_OFFSET
hex
depends on KASAN
default 0x18000000000000
default 0x1C000000000000
config S390
def_bool y
@ -194,6 +194,7 @@ config S390
select HAVE_RELIABLE_STACKTRACE
select HAVE_RSEQ
select HAVE_SAMPLE_FTRACE_DIRECT
select HAVE_SAMPLE_FTRACE_DIRECT_MULTI
select HAVE_SOFTIRQ_ON_OWN_STACK
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_VIRT_CPU_ACCOUNTING

View File

@ -77,10 +77,12 @@ KBUILD_AFLAGS_DECOMPRESSOR += $(aflags-y)
KBUILD_CFLAGS_DECOMPRESSOR += $(cflags-y)
ifneq ($(call cc-option,-mstack-size=8192 -mstack-guard=128),)
cflags-$(CONFIG_CHECK_STACK) += -mstack-size=$(STACK_SIZE)
ifeq ($(call cc-option,-mstack-size=8192),)
cflags-$(CONFIG_CHECK_STACK) += -mstack-guard=$(CONFIG_STACK_GUARD)
endif
CC_FLAGS_CHECK_STACK := -mstack-size=$(STACK_SIZE)
ifeq ($(call cc-option,-mstack-size=8192),)
CC_FLAGS_CHECK_STACK += -mstack-guard=$(CONFIG_STACK_GUARD)
endif
export CC_FLAGS_CHECK_STACK
cflags-$(CONFIG_CHECK_STACK) += $(CC_FLAGS_CHECK_STACK)
endif
ifdef CONFIG_EXPOLINE


@ -149,82 +149,56 @@ static void setup_ident_map_size(unsigned long max_physmem_end)
static void setup_kernel_memory_layout(void)
{
bool vmalloc_size_verified = false;
unsigned long vmemmap_off;
unsigned long vspace_left;
unsigned long vmemmap_start;
unsigned long rte_size;
unsigned long pages;
unsigned long vmax;
pages = ident_map_size / PAGE_SIZE;
/* vmemmap contains a multiple of PAGES_PER_SECTION struct pages */
vmemmap_size = SECTION_ALIGN_UP(pages) * sizeof(struct page);
/* choose kernel address space layout: 4 or 3 levels. */
vmemmap_off = round_up(ident_map_size, _REGION3_SIZE);
vmemmap_start = round_up(ident_map_size, _REGION3_SIZE);
if (IS_ENABLED(CONFIG_KASAN) ||
vmalloc_size > _REGION2_SIZE ||
vmemmap_off + vmemmap_size + vmalloc_size + MODULES_LEN > _REGION2_SIZE)
vmax = _REGION1_SIZE;
else
vmax = _REGION2_SIZE;
/* keep vmemmap_off aligned to a top level region table entry */
rte_size = vmax == _REGION1_SIZE ? _REGION2_SIZE : _REGION3_SIZE;
MODULES_END = vmax;
if (is_prot_virt_host()) {
/*
* forcing modules and vmalloc area under the ultravisor
* secure storage limit, so that any vmalloc allocation
* we do could be used to back secure guest storage.
*/
adjust_to_uv_max(&MODULES_END);
}
#ifdef CONFIG_KASAN
if (MODULES_END < vmax) {
/* force vmalloc and modules below kasan shadow */
MODULES_END = min(MODULES_END, KASAN_SHADOW_START);
vmemmap_start + vmemmap_size + vmalloc_size + MODULES_LEN >
_REGION2_SIZE) {
MODULES_END = _REGION1_SIZE;
rte_size = _REGION2_SIZE;
} else {
/*
* leave vmalloc and modules above kasan shadow but make
* sure they don't overlap with it
*/
vmalloc_size = min(vmalloc_size, vmax - KASAN_SHADOW_END - MODULES_LEN);
vmalloc_size_verified = true;
vspace_left = KASAN_SHADOW_START;
MODULES_END = _REGION2_SIZE;
rte_size = _REGION3_SIZE;
}
/*
* forcing modules and vmalloc area under the ultravisor
* secure storage limit, so that any vmalloc allocation
* we do could be used to back secure guest storage.
*/
adjust_to_uv_max(&MODULES_END);
#ifdef CONFIG_KASAN
/* force vmalloc and modules below kasan shadow */
MODULES_END = min(MODULES_END, KASAN_SHADOW_START);
#endif
MODULES_VADDR = MODULES_END - MODULES_LEN;
VMALLOC_END = MODULES_VADDR;
if (vmalloc_size_verified) {
VMALLOC_START = VMALLOC_END - vmalloc_size;
} else {
vmemmap_off = round_up(ident_map_size, rte_size);
/* allow vmalloc area to occupy up to about 1/2 of the rest virtual space left */
vmalloc_size = min(vmalloc_size, round_down(VMALLOC_END / 2, _REGION3_SIZE));
VMALLOC_START = VMALLOC_END - vmalloc_size;
if (vmemmap_off + vmemmap_size > VMALLOC_END ||
vmalloc_size > VMALLOC_END - vmemmap_off - vmemmap_size) {
/*
* allow vmalloc area to occupy up to 1/2 of
* the rest virtual space left.
*/
vmalloc_size = min(vmalloc_size, VMALLOC_END / 2);
}
VMALLOC_START = VMALLOC_END - vmalloc_size;
vspace_left = VMALLOC_START;
}
pages = vspace_left / (PAGE_SIZE + sizeof(struct page));
/* split remaining virtual space between 1:1 mapping & vmemmap array */
pages = VMALLOC_START / (PAGE_SIZE + sizeof(struct page));
pages = SECTION_ALIGN_UP(pages);
vmemmap_off = round_up(vspace_left - pages * sizeof(struct page), rte_size);
/* keep vmemmap left most starting from a fresh region table entry */
vmemmap_off = min(vmemmap_off, round_up(ident_map_size, rte_size));
/* take care that identity map is lower then vmemmap */
ident_map_size = min(ident_map_size, vmemmap_off);
/* keep vmemmap_start aligned to a top level region table entry */
vmemmap_start = round_down(VMALLOC_START - pages * sizeof(struct page), rte_size);
/* vmemmap_start is the future VMEM_MAX_PHYS, make sure it is within MAX_PHYSMEM */
vmemmap_start = min(vmemmap_start, 1UL << MAX_PHYSMEM_BITS);
/* make sure identity map doesn't overlay with vmemmap */
ident_map_size = min(ident_map_size, vmemmap_start);
vmemmap_size = SECTION_ALIGN_UP(ident_map_size / PAGE_SIZE) * sizeof(struct page);
VMALLOC_START = max(vmemmap_off + vmemmap_size, VMALLOC_START);
vmemmap = (struct page *)vmemmap_off;
/* make sure vmemmap doesn't overlay with vmalloc area */
VMALLOC_START = max(vmemmap_start + vmemmap_size, VMALLOC_START);
vmemmap = (struct page *)vmemmap_start;
}
/*


@ -74,6 +74,12 @@ void *kexec_file_add_components(struct kimage *image,
int arch_kexec_do_relocs(int r_type, void *loc, unsigned long val,
unsigned long addr);
#define ARCH_HAS_KIMAGE_ARCH
struct kimage_arch {
void *ipl_buf;
};
extern const struct kexec_file_ops s390_kexec_image_ops;
extern const struct kexec_file_ops s390_kexec_elf_ops;


@ -191,8 +191,8 @@ static int copy_oldmem_user(void __user *dst, void *src, size_t count)
return rc;
} else {
/* Check for swapped kdump oldmem areas */
if (oldmem_data.start && from - oldmem_data.size < oldmem_data.size) {
from -= oldmem_data.size;
if (oldmem_data.start && from - oldmem_data.start < oldmem_data.size) {
from -= oldmem_data.start;
len = min(count, oldmem_data.size - from);
} else if (oldmem_data.start && from < oldmem_data.size) {
len = min(count, oldmem_data.size - from);


@ -2156,7 +2156,7 @@ void *ipl_report_finish(struct ipl_report *report)
buf = vzalloc(report->size);
if (!buf)
return ERR_PTR(-ENOMEM);
goto out;
ptr = buf;
memcpy(ptr, report->ipib, report->ipib->hdr.len);
@ -2195,6 +2195,7 @@ void *ipl_report_finish(struct ipl_report *report)
}
BUG_ON(ptr > buf + report->size);
out:
return buf;
}


@ -12,6 +12,7 @@
#include <linux/kexec.h>
#include <linux/module_signature.h>
#include <linux/verification.h>
#include <linux/vmalloc.h>
#include <asm/boot_data.h>
#include <asm/ipl.h>
#include <asm/setup.h>
@ -170,6 +171,7 @@ static int kexec_file_add_ipl_report(struct kimage *image,
struct kexec_buf buf;
unsigned long addr;
void *ptr, *end;
int ret;
buf.image = image;
@ -199,9 +201,13 @@ static int kexec_file_add_ipl_report(struct kimage *image,
ptr += len;
}
ret = -ENOMEM;
buf.buffer = ipl_report_finish(data->report);
if (!buf.buffer)
goto out;
buf.bufsz = data->report->size;
buf.memsz = buf.bufsz;
image->arch.ipl_buf = buf.buffer;
data->memsz += buf.memsz;
@ -209,7 +215,9 @@ static int kexec_file_add_ipl_report(struct kimage *image,
data->kernel_buf + offsetof(struct lowcore, ipl_parmblock_ptr);
*lc_ipl_parmblock_ptr = (__u32)buf.mem;
return kexec_add_buffer(&buf);
ret = kexec_add_buffer(&buf);
out:
return ret;
}
void *kexec_file_add_components(struct kimage *image,
@ -322,3 +330,11 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
}
return 0;
}
int arch_kimage_file_post_load_cleanup(struct kimage *image)
{
vfree(image->arch.ipl_buf);
image->arch.ipl_buf = NULL;
return kexec_image_post_load_cleanup_default(image);
}


@ -606,7 +606,7 @@ static void __init setup_resources(void)
static void __init setup_memory_end(void)
{
memblock_remove(ident_map_size, ULONG_MAX);
memblock_remove(ident_map_size, PHYS_ADDR_MAX - ident_map_size);
max_pfn = max_low_pfn = PFN_DOWN(ident_map_size);
pr_notice("The maximum memory size is %luMB\n", ident_map_size >> 20);
}
@ -637,14 +637,6 @@ static struct notifier_block kdump_mem_nb = {
#endif
/*
* Make sure that the area above identity mapping is protected
*/
static void __init reserve_above_ident_map(void)
{
memblock_reserve(ident_map_size, ULONG_MAX);
}
/*
* Reserve memory for kdump kernel to be loaded with kexec
*/
@ -785,7 +777,6 @@ static void __init memblock_add_mem_detect_info(void)
}
memblock_set_bottom_up(false);
memblock_set_node(0, ULONG_MAX, &memblock.memory, 0);
memblock_dump_all();
}
/*
@ -826,9 +817,6 @@ static void __init setup_memory(void)
storage_key_init_range(start, end);
psw_set_key(PAGE_DEFAULT_KEY);
/* Only cosmetics */
memblock_enforce_memory_limit(memblock_end_of_DRAM());
}
static void __init relocate_amode31_section(void)
@ -999,24 +987,24 @@ void __init setup_arch(char **cmdline_p)
setup_control_program_code();
/* Do some memory reservations *before* memory is added to memblock */
reserve_above_ident_map();
reserve_kernel();
reserve_initrd();
reserve_certificate_list();
reserve_mem_detect_info();
memblock_set_current_limit(ident_map_size);
memblock_allow_resize();
/* Get information about *all* installed memory */
memblock_add_mem_detect_info();
free_mem_detect_info();
setup_memory_end();
memblock_dump_all();
setup_memory();
relocate_amode31_section();
setup_cr();
setup_uv();
setup_memory_end();
setup_memory();
dma_contiguous_reserve(ident_map_size);
vmcp_cma_reserve();
if (MACHINE_HAS_EDAT2)


@ -451,3 +451,4 @@
446 common landlock_restrict_self sys_landlock_restrict_self sys_landlock_restrict_self
# 447 reserved for memfd_secret
448 common process_mrelease sys_process_mrelease sys_process_mrelease
449 common futex_waitv sys_futex_waitv sys_futex_waitv


@ -84,7 +84,7 @@ static void default_trap_handler(struct pt_regs *regs)
{
if (user_mode(regs)) {
report_user_fault(regs, SIGSEGV, 0);
force_fatal_sig(SIGSEGV);
force_exit_sig(SIGSEGV);
} else
die(regs, "Unknown program exception");
}


@ -22,7 +22,7 @@ KBUILD_AFLAGS_32 += -m31 -s
KBUILD_CFLAGS_32 := $(filter-out -m64,$(KBUILD_CFLAGS))
KBUILD_CFLAGS_32 += -m31 -fPIC -shared -fno-common -fno-builtin
LDFLAGS_vdso32.so.dbg += -fPIC -shared -nostdlib -soname=linux-vdso32.so.1 \
LDFLAGS_vdso32.so.dbg += -fPIC -shared -soname=linux-vdso32.so.1 \
--hash-style=both --build-id=sha1 -melf_s390 -T
$(targets:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)


@ -8,8 +8,9 @@ ARCH_REL_TYPE_ABS += R_390_GOT|R_390_PLT
include $(srctree)/lib/vdso/Makefile
obj-vdso64 = vdso_user_wrapper.o note.o
obj-cvdso64 = vdso64_generic.o getcpu.o
CFLAGS_REMOVE_getcpu.o = -pg $(CC_FLAGS_FTRACE) $(CC_FLAGS_EXPOLINE)
CFLAGS_REMOVE_vdso64_generic.o = -pg $(CC_FLAGS_FTRACE) $(CC_FLAGS_EXPOLINE)
VDSO_CFLAGS_REMOVE := -pg $(CC_FLAGS_FTRACE) $(CC_FLAGS_EXPOLINE) $(CC_FLAGS_CHECK_STACK)
CFLAGS_REMOVE_getcpu.o = $(VDSO_CFLAGS_REMOVE)
CFLAGS_REMOVE_vdso64_generic.o = $(VDSO_CFLAGS_REMOVE)
# Build rules
@ -25,7 +26,7 @@ KBUILD_AFLAGS_64 += -m64 -s
KBUILD_CFLAGS_64 := $(filter-out -m64,$(KBUILD_CFLAGS))
KBUILD_CFLAGS_64 += -m64 -fPIC -shared -fno-common -fno-builtin
ldflags-y := -fPIC -shared -nostdlib -soname=linux-vdso64.so.1 \
ldflags-y := -fPIC -shared -soname=linux-vdso64.so.1 \
--hash-style=both --build-id=sha1 -T
$(targets:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_64)


@ -585,6 +585,8 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = KVM_MAX_VCPUS;
else if (sclp.has_esca && sclp.has_64bscao)
r = KVM_S390_ESCA_CPU_SLOTS;
if (ext == KVM_CAP_NR_VCPUS)
r = min_t(unsigned int, num_online_cpus(), r);
break;
case KVM_CAP_S390_COW:
r = MACHINE_HAS_ESOP;


@ -244,7 +244,7 @@ static int setup_frame(struct ksignal *ksig, struct pt_regs *regs,
get_sigframe(ksig, regs, sigframe_size);
if (invalid_frame_pointer(sf, sigframe_size)) {
force_fatal_sig(SIGILL);
force_exit_sig(SIGILL);
return -EINVAL;
}
@ -336,7 +336,7 @@ static int setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs,
sf = (struct rt_signal_frame __user *)
get_sigframe(ksig, regs, sigframe_size);
if (invalid_frame_pointer(sf, sigframe_size)) {
force_fatal_sig(SIGILL);
force_exit_sig(SIGILL);
return -EINVAL;
}


@ -122,7 +122,7 @@ void try_to_clear_window_buffer(struct pt_regs *regs, int who)
if ((sp & 7) ||
copy_to_user((char __user *) sp, &tp->reg_window[window],
sizeof(struct reg_window32))) {
force_fatal_sig(SIGILL);
force_exit_sig(SIGILL);
return;
}
}


@ -193,7 +193,7 @@ config X86
select HAVE_DYNAMIC_FTRACE_WITH_ARGS if X86_64
select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
select HAVE_SAMPLE_FTRACE_DIRECT if X86_64
select HAVE_SAMPLE_FTRACE_MULTI_DIRECT if X86_64
select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if X86_64
select HAVE_EBPF_JIT
select HAVE_EFFICIENT_UNALIGNED_ACCESS
select HAVE_EISA


@ -226,7 +226,7 @@ bool emulate_vsyscall(unsigned long error_code,
if ((!tmp && regs->orig_ax != syscall_nr) || regs->ip != address) {
warn_bad_vsyscall(KERN_DEBUG, regs,
"seccomp tried to change syscall nr or ip");
force_fatal_sig(SIGSYS);
force_exit_sig(SIGSYS);
return true;
}
regs->orig_ax = -1;


@ -2211,7 +2211,6 @@ intel_pmu_snapshot_branch_stack(struct perf_branch_entry *entries, unsigned int
/* must not have branches... */
local_irq_save(flags);
__intel_pmu_disable_all(false); /* we don't care about BTS */
__intel_pmu_pebs_disable_all();
__intel_pmu_lbr_disable();
/* ... until here */
return __intel_pmu_snapshot_branch_stack(entries, cnt, flags);
@ -2225,7 +2224,6 @@ intel_pmu_snapshot_arch_branch_stack(struct perf_branch_entry *entries, unsigned
/* must not have branches... */
local_irq_save(flags);
__intel_pmu_disable_all(false); /* we don't care about BTS */
__intel_pmu_pebs_disable_all();
__intel_pmu_arch_lbr_disable();
/* ... until here */
return __intel_pmu_snapshot_branch_stack(entries, cnt, flags);


@ -3608,6 +3608,9 @@ static int skx_cha_hw_config(struct intel_uncore_box *box, struct perf_event *ev
struct hw_perf_event_extra *reg1 = &event->hw.extra_reg;
struct extra_reg *er;
int idx = 0;
/* Any of the CHA events may be filtered by Thread/Core-ID.*/
if (event->hw.config & SNBEP_CBO_PMON_CTL_TID_EN)
idx = SKX_CHA_MSR_PMON_BOX_FILTER_TID;
for (er = skx_uncore_cha_extra_regs; er->msr; er++) {
if (er->event != (event->hw.config & er->config_mask))
@ -3675,6 +3678,7 @@ static struct event_constraint skx_uncore_iio_constraints[] = {
UNCORE_EVENT_CONSTRAINT(0xc0, 0xc),
UNCORE_EVENT_CONSTRAINT(0xc5, 0xc),
UNCORE_EVENT_CONSTRAINT(0xd4, 0xc),
UNCORE_EVENT_CONSTRAINT(0xd5, 0xc),
EVENT_CONSTRAINT_END
};
@ -4525,6 +4529,13 @@ static void snr_iio_cleanup_mapping(struct intel_uncore_type *type)
pmu_iio_cleanup_mapping(type, &snr_iio_mapping_group);
}
static struct event_constraint snr_uncore_iio_constraints[] = {
UNCORE_EVENT_CONSTRAINT(0x83, 0x3),
UNCORE_EVENT_CONSTRAINT(0xc0, 0xc),
UNCORE_EVENT_CONSTRAINT(0xd5, 0xc),
EVENT_CONSTRAINT_END
};
static struct intel_uncore_type snr_uncore_iio = {
.name = "iio",
.num_counters = 4,
@ -4536,6 +4547,7 @@ static struct intel_uncore_type snr_uncore_iio = {
.event_mask_ext = SNR_IIO_PMON_RAW_EVENT_MASK_EXT,
.box_ctl = SNR_IIO_MSR_PMON_BOX_CTL,
.msr_offset = SNR_IIO_MSR_OFFSET,
.constraints = snr_uncore_iio_constraints,
.ops = &ivbep_uncore_msr_ops,
.format_group = &snr_uncore_iio_format_group,
.attr_update = snr_iio_attr_update,


@ -177,6 +177,9 @@ void set_hv_tscchange_cb(void (*cb)(void))
return;
}
if (!hv_vp_index)
return;
hv_reenlightenment_cb = cb;
/* Make sure callback is registered before we write to MSRs */
@ -383,20 +386,13 @@ static void __init hv_get_partition_id(void)
*/
void __init hyperv_init(void)
{
u64 guest_id, required_msrs;
u64 guest_id;
union hv_x64_msr_hypercall_contents hypercall_msr;
int cpuhp;
if (x86_hyper_type != X86_HYPER_MS_HYPERV)
return;
/* Absolutely required MSRs */
required_msrs = HV_MSR_HYPERCALL_AVAILABLE |
HV_MSR_VP_INDEX_AVAILABLE;
if ((ms_hyperv.features & required_msrs) != required_msrs)
return;
if (hv_common_init())
return;


@ -363,6 +363,7 @@ union kvm_mmu_extended_role {
unsigned int cr4_smap:1;
unsigned int cr4_smep:1;
unsigned int cr4_la57:1;
unsigned int efer_lma:1;
};
};


@ -163,12 +163,22 @@ static uint32_t __init ms_hyperv_platform(void)
cpuid(HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS,
&eax, &hyp_signature[0], &hyp_signature[1], &hyp_signature[2]);
if (eax >= HYPERV_CPUID_MIN &&
eax <= HYPERV_CPUID_MAX &&
!memcmp("Microsoft Hv", hyp_signature, 12))
return HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS;
if (eax < HYPERV_CPUID_MIN || eax > HYPERV_CPUID_MAX ||
memcmp("Microsoft Hv", hyp_signature, 12))
return 0;
return 0;
/* HYPERCALL and VP_INDEX MSRs are mandatory for all features. */
eax = cpuid_eax(HYPERV_CPUID_FEATURES);
if (!(eax & HV_MSR_HYPERCALL_AVAILABLE)) {
pr_warn("x86/hyperv: HYPERCALL MSR not available.\n");
return 0;
}
if (!(eax & HV_MSR_VP_INDEX_AVAILABLE)) {
pr_warn("x86/hyperv: VP_INDEX MSR not available.\n");
return 0;
}
return HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS;
}
static unsigned char hv_get_nmi_reason(void)


@ -28,8 +28,7 @@ static DECLARE_WAIT_QUEUE_HEAD(ksgxd_waitq);
static LIST_HEAD(sgx_active_page_list);
static DEFINE_SPINLOCK(sgx_reclaimer_lock);
/* The free page list lock protected variables prepend the lock. */
static unsigned long sgx_nr_free_pages;
static atomic_long_t sgx_nr_free_pages = ATOMIC_LONG_INIT(0);
/* Nodes with one or more EPC sections. */
static nodemask_t sgx_numa_mask;
@ -403,14 +402,15 @@ skip:
spin_lock(&node->lock);
list_add_tail(&epc_page->list, &node->free_page_list);
sgx_nr_free_pages++;
spin_unlock(&node->lock);
atomic_long_inc(&sgx_nr_free_pages);
}
}
static bool sgx_should_reclaim(unsigned long watermark)
{
return sgx_nr_free_pages < watermark && !list_empty(&sgx_active_page_list);
return atomic_long_read(&sgx_nr_free_pages) < watermark &&
!list_empty(&sgx_active_page_list);
}
static int ksgxd(void *p)
@ -471,9 +471,9 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
page = list_first_entry(&node->free_page_list, struct sgx_epc_page, list);
list_del_init(&page->list);
sgx_nr_free_pages--;
spin_unlock(&node->lock);
atomic_long_dec(&sgx_nr_free_pages);
return page;
}
@ -625,9 +625,9 @@ void sgx_free_epc_page(struct sgx_epc_page *page)
spin_lock(&node->lock);
list_add_tail(&page->list, &node->free_page_list);
sgx_nr_free_pages++;
spin_unlock(&node->lock);
atomic_long_inc(&sgx_nr_free_pages);
}
static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
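
The hunk above converts sgx_nr_free_pages from a counter updated under the per-node lock to a global atomic_long_t, so sgx_should_reclaim() can read it without taking any node lock. A minimal sketch of the pattern with hypothetical names (the list itself still needs the lock; only the counter becomes lock-free):

#include <linux/atomic.h>
#include <linux/list.h>
#include <linux/spinlock.h>

static atomic_long_t nr_free = ATOMIC_LONG_INIT(0);
static LIST_HEAD(free_list);
static DEFINE_SPINLOCK(free_lock);

static void free_one(struct list_head *entry)
{
	spin_lock(&free_lock);
	list_add_tail(entry, &free_list);	/* list membership: still locked */
	spin_unlock(&free_lock);

	atomic_long_inc(&nr_free);		/* counter: atomic, no lock held */
}

static bool below_watermark(long watermark)
{
	/* Readers no longer have to take free_lock just for the count. */
	return atomic_long_read(&nr_free) < watermark;
}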


@ -964,6 +964,9 @@ unsigned long __get_wchan(struct task_struct *p)
struct unwind_state state;
unsigned long addr = 0;
if (!try_get_task_stack(p))
return 0;
for (unwind_start(&state, p, NULL, NULL); !unwind_done(&state);
unwind_next_frame(&state)) {
addr = unwind_get_return_address(&state);
@ -974,6 +977,8 @@ unsigned long __get_wchan(struct task_struct *p)
break;
}
put_task_stack(p);
return addr;
}
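
The two added lines above pin the task's stack for the duration of the unwind; without the reference, the stack of an exiting task could be freed while __get_wchan() walks it. A sketch of the guard pattern, with the unwind loop elided:

#include <linux/sched.h>
#include <linux/sched/task_stack.h>

static unsigned long walk_remote_stack(struct task_struct *p)
{
	unsigned long addr = 0;

	if (!try_get_task_stack(p))	/* stack already freed: nothing to walk */
		return 0;

	/* ... unwind p's stack here; the reference keeps it alive ... */

	put_task_stack(p);		/* drop the reference taken above */
	return addr;
}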


@ -742,6 +742,28 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
return 0;
}
static char *prepare_command_line(void)
{
#ifdef CONFIG_CMDLINE_BOOL
#ifdef CONFIG_CMDLINE_OVERRIDE
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
#else
if (builtin_cmdline[0]) {
/* append boot loader cmdline to builtin */
strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
}
#endif
#endif
strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
parse_early_param();
return command_line;
}
/*
* Determine if we were loaded by an EFI loader. If so, then we have also been
* passed the efi memmap, systab, etc., so we should use these data structures
@ -830,6 +852,23 @@ void __init setup_arch(char **cmdline_p)
x86_init.oem.arch_setup();
/*
* x86_configure_nx() is called before parse_early_param() (called by
* prepare_command_line()) to detect whether hardware doesn't support
* NX (so that the early EHCI debug console setup can safely call
* set_fixmap()). It may then be called again from within noexec_setup()
* during parsing early parameters to honor the respective command line
* option.
*/
x86_configure_nx();
/*
* This parses early params and it needs to run before
* early_reserve_memory() because latter relies on such settings
* supplied as early params.
*/
*cmdline_p = prepare_command_line();
/*
* Do some memory reservations *before* memory is added to memblock, so
* memblock allocations won't overwrite it.
@ -863,33 +902,6 @@ void __init setup_arch(char **cmdline_p)
bss_resource.start = __pa_symbol(__bss_start);
bss_resource.end = __pa_symbol(__bss_stop)-1;
#ifdef CONFIG_CMDLINE_BOOL
#ifdef CONFIG_CMDLINE_OVERRIDE
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
#else
if (builtin_cmdline[0]) {
/* append boot loader cmdline to builtin */
strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
}
#endif
#endif
strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
*cmdline_p = command_line;
/*
* x86_configure_nx() is called before parse_early_param() to detect
* whether hardware doesn't support NX (so that the early EHCI debug
* console setup can safely call set_fixmap()). It may then be called
* again from within noexec_setup() during parsing early parameters
* to honor the respective command line option.
*/
x86_configure_nx();
parse_early_param();
#ifdef CONFIG_MEMORY_HOTPLUG
/*
* Memory used by the kernel cannot be hot-removed because Linux


@ -160,7 +160,7 @@ Efault_end:
user_access_end();
Efault:
pr_alert("could not access userspace vm86 info\n");
force_fatal_sig(SIGSEGV);
force_exit_sig(SIGSEGV);
goto exit_vm86;
}


@ -125,7 +125,7 @@ static void kvm_update_kvm_cpuid_base(struct kvm_vcpu *vcpu)
}
}
struct kvm_cpuid_entry2 *kvm_find_kvm_cpuid_features(struct kvm_vcpu *vcpu)
static struct kvm_cpuid_entry2 *kvm_find_kvm_cpuid_features(struct kvm_vcpu *vcpu)
{
u32 base = vcpu->arch.kvm_cpuid_base;


@ -2022,7 +2022,7 @@ static void kvm_hv_hypercall_set_result(struct kvm_vcpu *vcpu, u64 result)
{
bool longmode;
longmode = is_64_bit_mode(vcpu);
longmode = is_64_bit_hypercall(vcpu);
if (longmode)
kvm_rax_write(vcpu, result);
else {
@ -2171,7 +2171,7 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
}
#ifdef CONFIG_X86_64
if (is_64_bit_mode(vcpu)) {
if (is_64_bit_hypercall(vcpu)) {
hc.param = kvm_rcx_read(vcpu);
hc.ingpa = kvm_rdx_read(vcpu);
hc.outgpa = kvm_r8_read(vcpu);


@ -4682,6 +4682,7 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu,
/* PKEY and LA57 are active iff long mode is active. */
ext.cr4_pke = ____is_efer_lma(regs) && ____is_cr4_pke(regs);
ext.cr4_la57 = ____is_efer_lma(regs) && ____is_cr4_la57(regs);
ext.efer_lma = ____is_efer_lma(regs);
}
ext.valid = 1;


@ -237,7 +237,6 @@ static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
bool es_active = argp->id == KVM_SEV_ES_INIT;
int asid, ret;
if (kvm->created_vcpus)
@ -247,7 +246,8 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
if (unlikely(sev->active))
return ret;
sev->es_active = es_active;
sev->active = true;
sev->es_active = argp->id == KVM_SEV_ES_INIT;
asid = sev_asid_new(sev);
if (asid < 0)
goto e_no_asid;
@ -257,8 +257,6 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
if (ret)
goto e_free;
sev->active = true;
sev->asid = asid;
INIT_LIST_HEAD(&sev->regions_list);
return 0;
@ -268,6 +266,7 @@ e_free:
sev->asid = 0;
e_no_asid:
sev->es_active = false;
sev->active = false;
return ret;
}
@ -1530,7 +1529,7 @@ static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
return sev_issue_cmd(kvm, SEV_CMD_RECEIVE_FINISH, &data, &argp->error);
}
static bool cmd_allowed_from_miror(u32 cmd_id)
static bool is_cmd_allowed_from_mirror(u32 cmd_id)
{
/*
* Allow mirrors VM to call KVM_SEV_LAUNCH_UPDATE_VMSA to enable SEV-ES
@ -1757,7 +1756,7 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
/* Only the enc_context_owner handles some memory enc operations. */
if (is_mirroring_enc_context(kvm) &&
!cmd_allowed_from_miror(sev_cmd.id)) {
!is_cmd_allowed_from_mirror(sev_cmd.id)) {
r = -EINVAL;
goto out;
}
@ -1990,7 +1989,12 @@ int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd)
mutex_unlock(&source_kvm->lock);
mutex_lock(&kvm->lock);
if (sev_guest(kvm)) {
/*
* Disallow out-of-band SEV/SEV-ES init if the target is already an
* SEV guest, or if vCPUs have been created. KVM relies on vCPUs being
* created after SEV/SEV-ES initialization, e.g. to init intercepts.
*/
if (sev_guest(kvm) || kvm->created_vcpus) {
ret = -EINVAL;
goto e_mirror_unlock;
}


@ -247,7 +247,7 @@ static __always_inline bool sev_es_guest(struct kvm *kvm)
#ifdef CONFIG_KVM_AMD_SEV
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
return sev_guest(kvm) && sev->es_active;
return sev->es_active && !WARN_ON_ONCE(!sev->active);
#else
return false;
#endif
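
The rewritten check above leans on WARN_ON_ONCE() evaluating to the truth value of its condition, so an inconsistent es_active-without-active state both warns once and makes the helper return false. A small sketch of the idiom with hypothetical fields (not the SEV structures):

#include <linux/bug.h>
#include <linux/types.h>

struct feature_state {
	bool extended;	/* only meaningful when enabled is set */
	bool enabled;
};

static bool extended_active(struct feature_state *s)
{
	/*
	 * WARN_ON_ONCE() returns true when its condition fires, so the
	 * inconsistent case warns (once) and the expression yields false.
	 */
	return s->extended && !WARN_ON_ONCE(!s->enabled);
}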


@ -670,33 +670,39 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
static void nested_cache_shadow_vmcs12(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12)
{
struct kvm_host_map map;
struct vmcs12 *shadow;
struct vcpu_vmx *vmx = to_vmx(vcpu);
struct gfn_to_hva_cache *ghc = &vmx->nested.shadow_vmcs12_cache;
if (!nested_cpu_has_shadow_vmcs(vmcs12) ||
vmcs12->vmcs_link_pointer == INVALID_GPA)
return;
shadow = get_shadow_vmcs12(vcpu);
if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->vmcs_link_pointer), &map))
if (ghc->gpa != vmcs12->vmcs_link_pointer &&
kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc,
vmcs12->vmcs_link_pointer, VMCS12_SIZE))
return;
memcpy(shadow, map.hva, VMCS12_SIZE);
kvm_vcpu_unmap(vcpu, &map, false);
kvm_read_guest_cached(vmx->vcpu.kvm, ghc, get_shadow_vmcs12(vcpu),
VMCS12_SIZE);
}
static void nested_flush_cached_shadow_vmcs12(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
struct gfn_to_hva_cache *ghc = &vmx->nested.shadow_vmcs12_cache;
if (!nested_cpu_has_shadow_vmcs(vmcs12) ||
vmcs12->vmcs_link_pointer == INVALID_GPA)
return;
kvm_write_guest(vmx->vcpu.kvm, vmcs12->vmcs_link_pointer,
get_shadow_vmcs12(vcpu), VMCS12_SIZE);
if (ghc->gpa != vmcs12->vmcs_link_pointer &&
kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc,
vmcs12->vmcs_link_pointer, VMCS12_SIZE))
return;
kvm_write_guest_cached(vmx->vcpu.kvm, ghc, get_shadow_vmcs12(vcpu),
VMCS12_SIZE);
}
/*
@ -2830,6 +2836,17 @@ static int nested_vmx_check_controls(struct kvm_vcpu *vcpu,
return 0;
}
static int nested_vmx_check_address_space_size(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12)
{
#ifdef CONFIG_X86_64
if (CC(!!(vmcs12->vm_exit_controls & VM_EXIT_HOST_ADDR_SPACE_SIZE) !=
!!(vcpu->arch.efer & EFER_LMA)))
return -EINVAL;
#endif
return 0;
}
static int nested_vmx_check_host_state(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12)
{
@ -2854,18 +2871,16 @@ static int nested_vmx_check_host_state(struct kvm_vcpu *vcpu,
return -EINVAL;
#ifdef CONFIG_X86_64
ia32e = !!(vcpu->arch.efer & EFER_LMA);
ia32e = !!(vmcs12->vm_exit_controls & VM_EXIT_HOST_ADDR_SPACE_SIZE);
#else
ia32e = false;
#endif
if (ia32e) {
if (CC(!(vmcs12->vm_exit_controls & VM_EXIT_HOST_ADDR_SPACE_SIZE)) ||
CC(!(vmcs12->host_cr4 & X86_CR4_PAE)))
if (CC(!(vmcs12->host_cr4 & X86_CR4_PAE)))
return -EINVAL;
} else {
if (CC(vmcs12->vm_exit_controls & VM_EXIT_HOST_ADDR_SPACE_SIZE) ||
CC(vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) ||
if (CC(vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) ||
CC(vmcs12->host_cr4 & X86_CR4_PCIDE) ||
CC((vmcs12->host_rip) >> 32))
return -EINVAL;
@ -2910,9 +2925,9 @@ static int nested_vmx_check_host_state(struct kvm_vcpu *vcpu,
static int nested_vmx_check_vmcs_link_ptr(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12)
{
int r = 0;
struct vmcs12 *shadow;
struct kvm_host_map map;
struct vcpu_vmx *vmx = to_vmx(vcpu);
struct gfn_to_hva_cache *ghc = &vmx->nested.shadow_vmcs12_cache;
struct vmcs_hdr hdr;
if (vmcs12->vmcs_link_pointer == INVALID_GPA)
return 0;
@ -2920,17 +2935,21 @@ static int nested_vmx_check_vmcs_link_ptr(struct kvm_vcpu *vcpu,
if (CC(!page_address_valid(vcpu, vmcs12->vmcs_link_pointer)))
return -EINVAL;
if (CC(kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->vmcs_link_pointer), &map)))
if (ghc->gpa != vmcs12->vmcs_link_pointer &&
CC(kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc,
vmcs12->vmcs_link_pointer, VMCS12_SIZE)))
return -EINVAL;
if (CC(kvm_read_guest_offset_cached(vcpu->kvm, ghc, &hdr,
offsetof(struct vmcs12, hdr),
sizeof(hdr))))
return -EINVAL;
shadow = map.hva;
if (CC(hdr.revision_id != VMCS12_REVISION) ||
CC(hdr.shadow_vmcs != nested_cpu_has_shadow_vmcs(vmcs12)))
return -EINVAL;
if (CC(shadow->hdr.revision_id != VMCS12_REVISION) ||
CC(shadow->hdr.shadow_vmcs != nested_cpu_has_shadow_vmcs(vmcs12)))
r = -EINVAL;
kvm_vcpu_unmap(vcpu, &map, false);
return r;
return 0;
}
/*
@ -3535,6 +3554,9 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
if (nested_vmx_check_controls(vcpu, vmcs12))
return nested_vmx_fail(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);
if (nested_vmx_check_address_space_size(vcpu, vmcs12))
return nested_vmx_fail(vcpu, VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
if (nested_vmx_check_host_state(vcpu, vmcs12))
return nested_vmx_fail(vcpu, VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
@ -5264,10 +5286,11 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu)
return 1;
if (vmx->nested.current_vmptr != vmptr) {
struct kvm_host_map map;
struct vmcs12 *new_vmcs12;
struct gfn_to_hva_cache *ghc = &vmx->nested.vmcs12_cache;
struct vmcs_hdr hdr;
if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmptr), &map)) {
if (ghc->gpa != vmptr &&
kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, vmptr, VMCS12_SIZE)) {
/*
* Reads from an unbacked page return all 1s,
* which means that the 32 bits located at the
@ -5278,12 +5301,16 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu)
VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID);
}
new_vmcs12 = map.hva;
if (kvm_read_guest_offset_cached(vcpu->kvm, ghc, &hdr,
offsetof(struct vmcs12, hdr),
sizeof(hdr))) {
return nested_vmx_fail(vcpu,
VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID);
}
if (new_vmcs12->hdr.revision_id != VMCS12_REVISION ||
(new_vmcs12->hdr.shadow_vmcs &&
if (hdr.revision_id != VMCS12_REVISION ||
(hdr.shadow_vmcs &&
!nested_cpu_has_vmx_shadow_vmcs(vcpu))) {
kvm_vcpu_unmap(vcpu, &map, false);
return nested_vmx_fail(vcpu,
VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID);
}
@ -5294,8 +5321,11 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu)
* Load VMCS12 from guest memory since it is not already
* cached.
*/
memcpy(vmx->nested.cached_vmcs12, new_vmcs12, VMCS12_SIZE);
kvm_vcpu_unmap(vcpu, &map, false);
if (kvm_read_guest_cached(vcpu->kvm, ghc, vmx->nested.cached_vmcs12,
VMCS12_SIZE)) {
return nested_vmx_fail(vcpu,
VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID);
}
set_current_vmptr(vmx, vmptr);
}
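
The conversions above replace kvm_vcpu_map()/kvm_vcpu_unmap() of the vmcs12 pages with gfn_to_hva_cache objects: the cache is primed once per guest physical address and then reused by kvm_read_guest_cached()/kvm_write_guest_cached(). A condensed sketch of the read side with illustrative names:

#include <linux/kvm_host.h>

static int read_guest_blob(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
			   gpa_t gpa, void *dst, unsigned long len)
{
	/* (Re)prime the cache only when the guest physical address changed. */
	if (ghc->gpa != gpa &&
	    kvm_gfn_to_hva_cache_init(kvm, ghc, gpa, len))
		return -EINVAL;

	/* Copy through the cached translation instead of mapping the page. */
	return kvm_read_guest_cached(kvm, ghc, dst, len);
}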


@ -141,6 +141,16 @@ struct nested_vmx {
*/
struct vmcs12 *cached_shadow_vmcs12;
/*
* GPA to HVA cache for accessing vmcs12->vmcs_link_pointer
*/
struct gfn_to_hva_cache shadow_vmcs12_cache;
/*
* GPA to HVA cache for VMCS12
*/
struct gfn_to_hva_cache vmcs12_cache;
/*
* Indicates if the shadow vmcs or enlightened vmcs must be updated
* with the data held by struct vmcs12.


@ -3307,9 +3307,9 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
"xor %1, %1\n"
"2:\n"
_ASM_EXTABLE_UA(1b, 2b)
: "+r" (st_preempted),
"+&r" (err)
: "m" (st->preempted));
: "+q" (st_preempted),
"+&r" (err),
"+m" (st->preempted));
if (err)
goto out;
@ -4179,7 +4179,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = !static_call(kvm_x86_cpu_has_accelerated_tpr)();
break;
case KVM_CAP_NR_VCPUS:
r = num_online_cpus();
r = min_t(unsigned int, num_online_cpus(), KVM_MAX_VCPUS);
break;
case KVM_CAP_MAX_VCPUS:
r = KVM_MAX_VCPUS;
@ -8848,7 +8848,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
trace_kvm_hypercall(nr, a0, a1, a2, a3);
op_64_bit = is_64_bit_mode(vcpu);
op_64_bit = is_64_bit_hypercall(vcpu);
if (!op_64_bit) {
nr &= 0xFFFFFFFF;
a0 &= 0xFFFFFFFF;
@ -9547,12 +9547,16 @@ static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu)
if (!kvm_apic_hw_enabled(vcpu->arch.apic))
return;
if (to_hv_vcpu(vcpu))
if (to_hv_vcpu(vcpu)) {
bitmap_or((ulong *)eoi_exit_bitmap,
vcpu->arch.ioapic_handled_vectors,
to_hv_synic(vcpu)->vec_bitmap, 256);
static_call(kvm_x86_load_eoi_exitmap)(vcpu, eoi_exit_bitmap);
return;
}
static_call(kvm_x86_load_eoi_exitmap)(vcpu, eoi_exit_bitmap);
static_call(kvm_x86_load_eoi_exitmap)(
vcpu, (u64 *)vcpu->arch.ioapic_handled_vectors);
}
void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,


@ -153,12 +153,24 @@ static inline bool is_64_bit_mode(struct kvm_vcpu *vcpu)
{
int cs_db, cs_l;
WARN_ON_ONCE(vcpu->arch.guest_state_protected);
if (!is_long_mode(vcpu))
return false;
static_call(kvm_x86_get_cs_db_l_bits)(vcpu, &cs_db, &cs_l);
return cs_l;
}
static inline bool is_64_bit_hypercall(struct kvm_vcpu *vcpu)
{
/*
* If running with protected guest state, the CS register is not
* accessible. The hypercall register values will have had to been
* provided in 64-bit mode, so assume the guest is in 64-bit.
*/
return vcpu->arch.guest_state_protected || is_64_bit_mode(vcpu);
}
static inline bool x86_exception_has_error_code(unsigned int vector)
{
static u32 exception_has_error_code = BIT(DF_VECTOR) | BIT(TS_VECTOR) |
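
The new is_64_bit_hypercall() helper lets hypercall dispatchers decide whether to truncate guest register values even when CS is unreadable, as the kvm_emulate_hypercall() and kvm_xen_hypercall() conversions elsewhere in this diff show. A stripped-down sketch of that decision (illustrative only, not kernel code):

static unsigned long hypercall_arg0(struct kvm_vcpu *vcpu)
{
	unsigned long a0 = kvm_rbx_read(vcpu);

	/* 32-bit callers only pass the low 32 bits of each argument. */
	if (!is_64_bit_hypercall(vcpu))
		a0 &= 0xFFFFFFFF;

	return a0;
}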


@ -127,9 +127,9 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
state_entry_time = vx->runstate_entry_time;
state_entry_time |= XEN_RUNSTATE_UPDATE;
BUILD_BUG_ON(sizeof(((struct vcpu_runstate_info *)0)->state_entry_time) !=
BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, state_entry_time) !=
sizeof(state_entry_time));
BUILD_BUG_ON(sizeof(((struct compat_vcpu_runstate_info *)0)->state_entry_time) !=
BUILD_BUG_ON(sizeof_field(struct compat_vcpu_runstate_info, state_entry_time) !=
sizeof(state_entry_time));
if (kvm_write_guest_offset_cached(v->kvm, &v->arch.xen.runstate_cache,
@ -144,9 +144,9 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
*/
BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, state) !=
offsetof(struct compat_vcpu_runstate_info, state));
BUILD_BUG_ON(sizeof(((struct vcpu_runstate_info *)0)->state) !=
BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, state) !=
sizeof(vx->current_runstate));
BUILD_BUG_ON(sizeof(((struct compat_vcpu_runstate_info *)0)->state) !=
BUILD_BUG_ON(sizeof_field(struct compat_vcpu_runstate_info, state) !=
sizeof(vx->current_runstate));
if (kvm_write_guest_offset_cached(v->kvm, &v->arch.xen.runstate_cache,
@ -163,9 +163,9 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
offsetof(struct vcpu_runstate_info, time) - sizeof(u64));
BUILD_BUG_ON(offsetof(struct compat_vcpu_runstate_info, state_entry_time) !=
offsetof(struct compat_vcpu_runstate_info, time) - sizeof(u64));
BUILD_BUG_ON(sizeof(((struct vcpu_runstate_info *)0)->time) !=
sizeof(((struct compat_vcpu_runstate_info *)0)->time));
BUILD_BUG_ON(sizeof(((struct vcpu_runstate_info *)0)->time) !=
BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, time) !=
sizeof_field(struct compat_vcpu_runstate_info, time));
BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, time) !=
sizeof(vx->runstate_times));
if (kvm_write_guest_offset_cached(v->kvm, &v->arch.xen.runstate_cache,
@ -205,9 +205,9 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
BUILD_BUG_ON(offsetof(struct vcpu_info, evtchn_upcall_pending) !=
offsetof(struct compat_vcpu_info, evtchn_upcall_pending));
BUILD_BUG_ON(sizeof(rc) !=
sizeof(((struct vcpu_info *)0)->evtchn_upcall_pending));
sizeof_field(struct vcpu_info, evtchn_upcall_pending));
BUILD_BUG_ON(sizeof(rc) !=
sizeof(((struct compat_vcpu_info *)0)->evtchn_upcall_pending));
sizeof_field(struct compat_vcpu_info, evtchn_upcall_pending));
/*
* For efficiency, this mirrors the checks for using the valid
@ -299,7 +299,7 @@ int kvm_xen_hvm_get_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data)
break;
case KVM_XEN_ATTR_TYPE_SHARED_INFO:
data->u.shared_info.gfn = gpa_to_gfn(kvm->arch.xen.shinfo_gfn);
data->u.shared_info.gfn = kvm->arch.xen.shinfo_gfn;
r = 0;
break;
@ -698,7 +698,7 @@ int kvm_xen_hypercall(struct kvm_vcpu *vcpu)
kvm_hv_hypercall_enabled(vcpu))
return kvm_hv_hypercall(vcpu);
longmode = is_64_bit_mode(vcpu);
longmode = is_64_bit_hypercall(vcpu);
if (!longmode) {
params[0] = (u32)kvm_rbx_read(vcpu);
params[1] = (u32)kvm_rcx_read(vcpu);
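
The BUILD_BUG_ON() churn above is a readability change: sizeof_field() from <linux/stddef.h> expands to the same null-pointer member access that was previously open-coded. A tiny sketch of the equivalence with an illustrative struct:

#include <linux/build_bug.h>
#include <linux/stddef.h>
#include <linux/types.h>

struct sample {
	u64 state_entry_time;
};

static void check_layout(void)
{
	/* Both lines check the same member size; the second reads better. */
	BUILD_BUG_ON(sizeof(((struct sample *)0)->state_entry_time) != sizeof(u64));
	BUILD_BUG_ON(sizeof_field(struct sample, state_entry_time) != sizeof(u64));
}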


@ -640,7 +640,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
*/
ret = blk_queue_enter(q, 0);
if (ret)
return ret;
goto fail;
rcu_read_lock();
spin_lock_irq(&q->queue_lock);
@ -676,13 +676,13 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
new_blkg = blkg_alloc(pos, q, GFP_KERNEL);
if (unlikely(!new_blkg)) {
ret = -ENOMEM;
goto fail;
goto fail_exit_queue;
}
if (radix_tree_preload(GFP_KERNEL)) {
blkg_free(new_blkg);
ret = -ENOMEM;
goto fail;
goto fail_exit_queue;
}
rcu_read_lock();
@ -722,9 +722,10 @@ fail_preloaded:
fail_unlock:
spin_unlock_irq(&q->queue_lock);
rcu_read_unlock();
fail_exit_queue:
blk_queue_exit(q);
fail:
blkdev_put_no_open(bdev);
blk_queue_exit(q);
/*
* If queue was bypassing, we should retry. Do so after a
* short msleep(). It isn't strictly necessary but queue
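
The error-path fix above follows the usual rule for goto unwinding: release resources in the reverse order they were acquired, one label per resource, so that blk_queue_enter() is always paired with blk_queue_exit(). A generic sketch of the idiom with hypothetical acquire/release helpers (not the block-layer functions):

int acquire_a(void);
int acquire_b(void);
int do_work(void);
void release_a(void);
void release_b(void);

static int setup_everything(void)
{
	int ret;

	ret = acquire_a();		/* e.g. take a queue reference */
	if (ret)
		goto fail;

	ret = acquire_b();		/* e.g. an allocation that can fail */
	if (ret)
		goto fail_release_a;

	ret = do_work();
	if (ret)
		goto fail_release_b;

	return 0;

fail_release_b:
	release_b();
fail_release_a:
	release_a();			/* reverse order of acquisition */
fail:
	return ret;
}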


@ -363,8 +363,10 @@ void blk_cleanup_queue(struct request_queue *q)
blk_queue_flag_set(QUEUE_FLAG_DEAD, q);
blk_sync_queue(q);
if (queue_is_mq(q))
if (queue_is_mq(q)) {
blk_mq_cancel_work_sync(q);
blk_mq_exit_queue(q);
}
/*
* In theory, request pool of sched_tags belongs to request queue.


@ -379,7 +379,7 @@ static void mq_flush_data_end_io(struct request *rq, blk_status_t error)
* @rq is being submitted. Analyze what needs to be done and put it on the
* right queue.
*/
bool blk_insert_flush(struct request *rq)
void blk_insert_flush(struct request *rq)
{
struct request_queue *q = rq->q;
unsigned long fflags = q->queue_flags; /* may change, cache */
@ -409,7 +409,7 @@ bool blk_insert_flush(struct request *rq)
*/
if (!policy) {
blk_mq_end_request(rq, 0);
return true;
return;
}
BUG_ON(rq->bio != rq->biotail); /*assumes zero or single bio rq */
@ -420,8 +420,10 @@ bool blk_insert_flush(struct request *rq)
* for normal execution.
*/
if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH)))
return false;
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
blk_mq_request_bypass_insert(rq, false, true);
return;
}
/*
* @rq should go through flush machinery. Mark it part of flush
@ -437,8 +439,6 @@ bool blk_insert_flush(struct request *rq)
spin_lock_irq(&fq->mq_flush_lock);
blk_flush_complete_seq(rq, fq, REQ_FSEQ_ACTIONS & ~policy, 0);
spin_unlock_irq(&fq->mq_flush_lock);
return true;
}
/**


@ -2543,8 +2543,7 @@ static struct request *blk_mq_get_new_requests(struct request_queue *q,
return NULL;
}
static inline bool blk_mq_can_use_cached_rq(struct request *rq,
struct bio *bio)
static inline bool blk_mq_can_use_cached_rq(struct request *rq, struct bio *bio)
{
if (blk_mq_get_hctx_type(bio->bi_opf) != rq->mq_hctx->type)
return false;
@ -2565,7 +2564,6 @@ static inline struct request *blk_mq_get_request(struct request_queue *q,
bool checked = false;
if (plug) {
rq = rq_list_peek(&plug->cached_rq);
if (rq && rq->q == q) {
if (unlikely(!submit_bio_checks(bio)))
@ -2587,12 +2585,14 @@ static inline struct request *blk_mq_get_request(struct request_queue *q,
fallback:
if (unlikely(bio_queue_enter(bio)))
return NULL;
if (!checked && !submit_bio_checks(bio))
return NULL;
if (unlikely(!checked && !submit_bio_checks(bio)))
goto out_put;
rq = blk_mq_get_new_requests(q, plug, bio, nsegs, same_queue_rq);
if (!rq)
blk_queue_exit(q);
return rq;
if (rq)
return rq;
out_put:
blk_queue_exit(q);
return NULL;
}
/**
@ -2647,8 +2647,10 @@ void blk_mq_submit_bio(struct bio *bio)
return;
}
if (op_is_flush(bio->bi_opf) && blk_insert_flush(rq))
if (op_is_flush(bio->bi_opf)) {
blk_insert_flush(rq);
return;
}
if (plug && (q->nr_hw_queues == 1 ||
blk_mq_is_shared_tags(rq->mq_hctx->flags) ||
@ -4417,6 +4419,19 @@ unsigned int blk_mq_rq_cpu(struct request *rq)
}
EXPORT_SYMBOL(blk_mq_rq_cpu);
void blk_mq_cancel_work_sync(struct request_queue *q)
{
if (queue_is_mq(q)) {
struct blk_mq_hw_ctx *hctx;
int i;
cancel_delayed_work_sync(&q->requeue_work);
queue_for_each_hw_ctx(q, hctx, i)
cancel_delayed_work_sync(&hctx->run_work);
}
}
static int __init blk_mq_init(void)
{
int i;

Some files were not shown because too many files have changed in this diff.