Merge branch 'for-6.13/steelseries' into for-linus

- bugfixes for Steelseries Arctis 1 battery level handling (Bastien Nocera)
Jiri Kosina 2024-11-18 21:31:09 +01:00
commit 873c578324
563 changed files with 6082 additions and 3192 deletions

View File

@ -73,6 +73,8 @@ Andrey Ryabinin <ryabinin.a.a@gmail.com> <aryabinin@virtuozzo.com>
Andrzej Hajda <andrzej.hajda@intel.com> <a.hajda@samsung.com>
André Almeida <andrealmeid@igalia.com> <andrealmeid@collabora.com>
Andy Adamson <andros@citi.umich.edu>
Andy Chiu <andybnac@gmail.com> <andy.chiu@sifive.com>
Andy Chiu <andybnac@gmail.com> <taochiu@synology.com>
Andy Shevchenko <andy@kernel.org> <andy@smile.org.ua>
Andy Shevchenko <andy@kernel.org> <ext-andriy.shevchenko@nokia.com>
Anilkumar Kolli <quic_akolli@quicinc.com> <akolli@codeaurora.org>

View File

@ -223,7 +223,10 @@ are signed through the PKCS#7 message format to enforce some level of
authorization of the policies (prohibiting an attacker from gaining
unconstrained root, and deploying an "allow all" policy). These
policies must be signed by a certificate that chains to the
``SYSTEM_TRUSTED_KEYRING``. With openssl, the policy can be signed by::
``SYSTEM_TRUSTED_KEYRING``, or to the secondary and/or platform keyrings if
``CONFIG_IPE_POLICY_SIG_SECONDARY_KEYRING`` and/or
``CONFIG_IPE_POLICY_SIG_PLATFORM_KEYRING`` are enabled, respectively.
With openssl, the policy can be signed by::
openssl smime -sign \
-in "$MY_POLICY" \
@ -266,7 +269,7 @@ in the kernel. This file is write-only and accepts a PKCS#7 signed
policy. Two checks will always be performed on this policy: First, the
``policy_names`` must match with the updated version and the existing
version. Second, the updated policy must have a policy version greater than
or equal to the currently-running version. This is to prevent rollback attacks.
the currently-running version. This is to prevent rollback attacks.
The ``delete`` file is used to remove a policy that is no longer needed.
This file is write-only and accepts a value of ``1`` to delete the policy.
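
A minimal userspace sketch of this update/delete flow follows; it is not part
of this patch, and the securityfs locations used below (``update`` and
``delete`` nodes under ``/sys/kernel/security/ipe/policies/$policy_name/``)
are assumptions for illustration only::

    /* Hedged sketch: write a PKCS#7-signed policy blob to a policy's
     * write-only "update" node, or the value "1" to its "delete" node.
     * Error handling is reduced to a pass/fail return value. */
    #include <fcntl.h>
    #include <unistd.h>

    static int ipe_write(const char *path, const void *buf, size_t len)
    {
            int fd = open(path, O_WRONLY);
            ssize_t ret;

            if (fd < 0)
                    return -1;
            ret = write(fd, buf, len);
            close(fd);
            return ret == (ssize_t)len ? 0 : -1;
    }

    /* Usage (paths are assumptions; adjust to your securityfs mount):
     *   ipe_write(".../policies/MyPolicy/update", signed_policy, policy_len);
     *   ipe_write(".../policies/MyPolicy/delete", "1", 1);
     * For the update node, the kernel verifies the PKCS#7 signature and the
     * policy version before accepting the write. */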

View File

@ -12,7 +12,10 @@ Pkeys Userspace (PKU) is a feature which can be found on:
* Intel server CPUs, Skylake and later
* Intel client CPUs, Tiger Lake (11th Gen Core) and later
* Future AMD CPUs
* arm64 CPUs implementing the Permission Overlay Extension (FEAT_S1POE)
x86_64
======
Pkeys work by dedicating 4 previously Reserved bits in each page table entry to
a "protection key", giving 16 possible keys.
@ -28,6 +31,22 @@ register. The feature is only available in 64-bit mode, even though there is
theoretically space in the PAE PTEs. These permissions are enforced on data
access only and have no effect on instruction fetches.
arm64
=====
Pkeys use 3 bits in each page table entry, to encode a "protection key index",
giving 8 possible keys.
Protections for each key are defined with a per-CPU user-writable system
register (POR_EL0). This is a 64-bit register encoding read, write and execute
overlay permissions for each protection key index.
Being a CPU register, POR_EL0 is inherently thread-local, potentially giving
each thread a different set of protections from every other thread.
Unlike x86_64, the protection key permissions also apply to instruction
fetches.
Syscalls
========
@ -38,11 +57,10 @@ There are 3 system calls which directly interact with pkeys::
int pkey_mprotect(unsigned long start, size_t len,
unsigned long prot, int pkey);
Before a pkey can be used, it must first be allocated with
pkey_alloc(). An application calls the WRPKRU instruction
directly in order to change access permissions to memory covered
with a key. In this example WRPKRU is wrapped by a C function
called pkey_set().
Before a pkey can be used, it must first be allocated with pkey_alloc(). An
application writes to the architecture specific CPU register directly in order
to change access permissions to memory covered with a key. In this example
this is wrapped by a C function called pkey_set().
::
int real_prot = PROT_READ|PROT_WRITE;
@ -64,9 +82,9 @@ is no longer in use::
munmap(ptr, PAGE_SIZE);
pkey_free(pkey);
.. note:: pkey_set() is a wrapper for the RDPKRU and WRPKRU instructions.
An example implementation can be found in
tools/testing/selftests/x86/protection_keys.c.
.. note:: pkey_set() is a wrapper around writing to the CPU register.
Example implementations can be found in
tools/testing/selftests/mm/pkey-{arm64,powerpc,x86}.h
Behavior
========
@ -96,3 +114,7 @@ with a read()::
The kernel will send a SIGSEGV in both cases, but si_code will be set
to SEGV_PKERR when violating protection keys versus SEGV_ACCERR when
the plain mprotect() permissions are violated.
Note that kernel accesses from a kthread (such as io_uring) will use a default
value for the protection key register and so will not be consistent with
userspace's value of the register or mprotect().
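
As a rough end-to-end sketch of the flow described above (not part of the
kernel sources; it assumes the glibc >= 2.27 wrappers pkey_alloc(),
pkey_mprotect(), pkey_set() and pkey_free(), and omits error handling)::

    #define _GNU_SOURCE
    #include <sys/mman.h>

    int main(void)
    {
            long page = 4096;
            char *ptr = mmap(NULL, page, PROT_READ | PROT_WRITE,
                             MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

            /* Allocate a key and attach it to the mapping. */
            int pkey = pkey_alloc(0, 0);
            pkey_mprotect(ptr, page, PROT_READ | PROT_WRITE, pkey);

            /* Revoke access through the per-thread register (PKRU on
             * x86_64, POR_EL0 on arm64); the page tables are untouched. */
            pkey_set(pkey, PKEY_DISABLE_ACCESS);
            /* ptr[0] = 1; here would raise SIGSEGV with SEGV_PKERR */

            /* Restore access, then release the key once it is unused. */
            pkey_set(pkey, 0);
            munmap(ptr, page);
            pkey_free(pkey);
            return 0;
    }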

View File

@ -4,7 +4,7 @@
$id: http://devicetree.org/schemas/iio/dac/adi,ad5686.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Analog Devices AD5360 and similar DACs
title: Analog Devices AD5360 and similar SPI DACs
maintainers:
- Michael Hennerich <michael.hennerich@analog.com>
@ -12,41 +12,22 @@ maintainers:
properties:
compatible:
oneOf:
- description: SPI devices
enum:
- adi,ad5310r
- adi,ad5672r
- adi,ad5674r
- adi,ad5676
- adi,ad5676r
- adi,ad5679r
- adi,ad5681r
- adi,ad5682r
- adi,ad5683
- adi,ad5683r
- adi,ad5684
- adi,ad5684r
- adi,ad5685r
- adi,ad5686
- adi,ad5686r
- description: I2C devices
enum:
- adi,ad5311r
- adi,ad5337r
- adi,ad5338r
- adi,ad5671r
- adi,ad5675r
- adi,ad5691r
- adi,ad5692r
- adi,ad5693
- adi,ad5693r
- adi,ad5694
- adi,ad5694r
- adi,ad5695r
- adi,ad5696
- adi,ad5696r
enum:
- adi,ad5310r
- adi,ad5672r
- adi,ad5674r
- adi,ad5676
- adi,ad5676r
- adi,ad5679r
- adi,ad5681r
- adi,ad5682r
- adi,ad5683
- adi,ad5683r
- adi,ad5684
- adi,ad5684r
- adi,ad5685r
- adi,ad5686
- adi,ad5686r
reg:
maxItems: 1

View File

@ -4,7 +4,7 @@
$id: http://devicetree.org/schemas/iio/dac/adi,ad5696.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Analog Devices AD5696 and similar multi-channel DACs
title: Analog Devices AD5696 and similar I2C multi-channel DACs
maintainers:
- Michael Auchter <michael.auchter@ni.com>
@ -16,6 +16,7 @@ properties:
compatible:
enum:
- adi,ad5311r
- adi,ad5337r
- adi,ad5338r
- adi,ad5671r
- adi,ad5675r

View File

@ -26,6 +26,7 @@ properties:
- brcm,asp-v2.1-mdio
- brcm,asp-v2.2-mdio
- brcm,unimac-mdio
- brcm,bcm6846-mdio
reg:
minItems: 1

View File

@ -208,7 +208,7 @@ The filesystem must arrange to `cancel
such `reservations
<https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/>`_
because writeback will not consume the reservation.
The ``iomap_file_buffered_write_punch_delalloc`` can be called from a
The ``iomap_write_delalloc_release`` can be called from a
``->iomap_end`` function to find all the clean areas of the folios
caching a fresh (``IOMAP_F_NEW``) delalloc mapping.
It takes the ``invalidate_lock``.

View File

@ -592,4 +592,3 @@ API Function Reference
.. kernel-doc:: include/linux/netfs.h
.. kernel-doc:: fs/netfs/buffered_read.c
.. kernel-doc:: fs/netfs/io.c

View File

@ -7,26 +7,26 @@ The DAMON subsystem covers the files that are listed in 'DATA ACCESS MONITOR'
section of 'MAINTAINERS' file.
The mailing lists for the subsystem are damon@lists.linux.dev and
linux-mm@kvack.org. Patches should be made against the mm-unstable `tree
<https://git.kernel.org/akpm/mm/h/mm-unstable>` whenever possible and posted to
the mailing lists.
linux-mm@kvack.org. Patches should be made against the `mm-unstable tree
<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ whenever possible and posted
to the mailing lists.
SCM Trees
---------
There are multiple Linux trees for DAMON development. Patches under
development or testing are queued in `damon/next
<https://git.kernel.org/sj/h/damon/next>` by the DAMON maintainer.
<https://git.kernel.org/sj/h/damon/next>`_ by the DAMON maintainer.
Sufficiently reviewed patches will be queued in `mm-unstable
<https://git.kernel.org/akpm/mm/h/mm-unstable>` by the memory management
<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ by the memory management
subsystem maintainer. After more sufficient tests, the patches will be queued
in `mm-stable <https://git.kernel.org/akpm/mm/h/mm-stable>` , and finally
in `mm-stable <https://git.kernel.org/akpm/mm/h/mm-stable>`_, and finally
pull-requested to the mainline by the memory management subsystem maintainer.
Note again the patches for mm-unstable `tree
<https://git.kernel.org/akpm/mm/h/mm-unstable>` are queued by the memory
Note again the patches for `mm-unstable tree
<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ are queued by the memory
management subsystem maintainer. If the patches require some patches in
damon/next `tree <https://git.kernel.org/sj/h/damon/next>` which not yet merged
`damon/next tree <https://git.kernel.org/sj/h/damon/next>`_ which are not yet merged
in mm-unstable, please make sure the requirement is clearly specified.
Submit checklist addendum
@ -37,25 +37,25 @@ When making DAMON changes, you should do below.
- Build the outputs related to your changes, including the kernel and documents.
- Ensure the builds introduce no new errors or warnings.
- Run and ensure no new failures for DAMON `selftests
<https://github.com/awslabs/damon-tests/blob/master/corr/run.sh#L49>` and
<https://github.com/damonitor/damon-tests/blob/master/corr/run.sh#L49>`_ and
`kunittests
<https://github.com/awslabs/damon-tests/blob/master/corr/tests/kunit.sh>`.
<https://github.com/damonitor/damon-tests/blob/master/corr/tests/kunit.sh>`_.
Further, doing the below and sharing the results will be helpful.
- Run `damon-tests/corr
<https://github.com/awslabs/damon-tests/tree/master/corr>` for normal
<https://github.com/damonitor/damon-tests/tree/master/corr>`_ for normal
changes.
- Run `damon-tests/perf
<https://github.com/awslabs/damon-tests/tree/master/perf>` for performance
<https://github.com/damonitor/damon-tests/tree/master/perf>`_ for performance
changes.
Key cycle dates
---------------
Patches can be sent anytime. Key cycle dates of the `mm-unstable
<https://git.kernel.org/akpm/mm/h/mm-unstable>` and `mm-stable
<https://git.kernel.org/akpm/mm/h/mm-stable>` trees depend on the memory
<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ and `mm-stable
<https://git.kernel.org/akpm/mm/h/mm-stable>`_ trees depend on the memory
management subsystem maintainer.
Review cadence
@ -72,13 +72,13 @@ Mailing tool
Like many other Linux kernel subsystems, DAMON uses the mailing lists
(damon@lists.linux.dev and linux-mm@kvack.org) as the major communication
channel. There is a simple tool called `HacKerMaiL
<https://github.com/damonitor/hackermail>` (``hkml``), which is for people who
<https://github.com/damonitor/hackermail>`_ (``hkml``), which is for people who
are not very familiar with mailing-list-based communication. The tool
could be particularly helpful for DAMON community members since it is developed
and maintained by the DAMON maintainer. The tool is also officially announced to
support DAMON and the general Linux kernel development workflow.
In other words, `hkml <https://github.com/damonitor/hackermail>` is a mailing
In other words, `hkml <https://github.com/damonitor/hackermail>`_ is a mailing
tool for the DAMON community, which the DAMON maintainer is committed to supporting.
Please feel free to try and report issues or feature requests for the tool to
the maintainer.
@ -98,8 +98,8 @@ slots, and attendees should reserve one of those at least 24 hours before the
time slot, by reaching out to the maintainer.
Schedules and available reservation time slots are available at the Google `doc
<https://docs.google.com/document/d/1v43Kcj3ly4CYqmAkMaZzLiM2GEnWfgdGbZAH3mi2vpM/edit?usp=sharing>`.
<https://docs.google.com/document/d/1v43Kcj3ly4CYqmAkMaZzLiM2GEnWfgdGbZAH3mi2vpM/edit?usp=sharing>`_.
There is also a public Google `calendar
<https://calendar.google.com/calendar/u/0?cid=ZDIwOTA4YTMxNjc2MDQ3NTIyMmUzYTM5ZmQyM2U4NDA0ZGIwZjBiYmJlZGQxNDM0MmY4ZTRjOTE0NjdhZDRiY0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t>`
<https://calendar.google.com/calendar/u/0?cid=ZDIwOTA4YTMxNjc2MDQ3NTIyMmUzYTM5ZmQyM2U4NDA0ZGIwZjBiYmJlZGQxNDM0MmY4ZTRjOTE0NjdhZDRiY0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t>`_
that has the events. Anyone can subscribe to it. The DAMON maintainer will also
provide periodic reminders to the mailing list (damon@lists.linux.dev).

View File

@ -30,10 +30,13 @@ tree as a dedicated branch covering multiple subsystems.
The main SoC tree is housed on git.kernel.org:
https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc.git/
Maintainers
-----------
Clearly this is quite a wide range of topics, which no one person, or even a
small group of people, is capable of maintaining. Instead, the SoC subsystem
is comprised of many submaintainers, each taking care of individual platforms
and driver subdirectories.
is comprised of many submaintainers (platform maintainers), each taking care of
individual platforms and driver subdirectories.
In this regard, "platform" usually refers to a series of SoCs from a given
vendor, for example, Nvidia's series of Tegra SoCs. Many submaintainers operate
on a vendor level, responsible for multiple product lines. For several reasons,
@ -43,14 +46,43 @@ MAINTAINERS file.
Most of these submaintainers have their own trees where they stage patches,
sending pull requests to the main SoC tree. These trees are usually, but not
always, listed in MAINTAINERS. The main SoC maintainers can be reached via the
alias soc@kernel.org if there is no platform-specific maintainer, or if they
are unresponsive.
always, listed in MAINTAINERS.
What the SoC tree is not, however, is a location for architecture-specific code
changes. Each architecture has its own maintainers that are responsible for
architectural details, CPU errata and the like.
Submitting Patches for Given SoC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All typical platform-related patches should be sent via the SoC submaintainers
(platform-specific maintainers). This also includes changes to per-platform or
shared defconfigs (scripts/get_maintainer.pl might not provide correct
addresses in such cases).
Submitting Patches to the Main SoC Maintainers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The main SoC maintainers can be reached via the alias soc@kernel.org only in the
following cases:
1. There are no platform-specific maintainers.
2. Platform-specific maintainers are unresponsive.
3. Introducing a completely new SoC platform. Such new SoC work should be sent
first to common mailing lists, pointed out by scripts/get_maintainer.pl, for
community review. After positive community review, work should be sent to
soc@kernel.org in one patchset containing the new arch/foo/Kconfig entry, DTS
files, the MAINTAINERS file entry and, optionally, initial drivers with their
Devicetree bindings. The MAINTAINERS file entry should list the new
platform-specific maintainers, who will be responsible for handling
patches for the platform from now on.
Note that soc@kernel.org is usually not the place to discuss the patches;
thus, work sent to this address should already be considered acceptable by
the community.
Information for (new) Submaintainers
------------------------------------

View File

@ -8098,13 +8098,15 @@ KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS By default, KVM emulates MONITOR/MWAIT (if
KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT is
disabled.
KVM_X86_QUIRK_SLOT_ZAP_ALL By default, KVM invalidates all SPTEs in
fast way for memslot deletion when VM type
is KVM_X86_DEFAULT_VM.
When this quirk is disabled or when VM type
is other than KVM_X86_DEFAULT_VM, KVM zaps
only leaf SPTEs that are within the range of
the memslot being deleted.
KVM_X86_QUIRK_SLOT_ZAP_ALL By default, for KVM_X86_DEFAULT_VM VMs, KVM
invalidates all SPTEs in all memslots and
address spaces when a memslot is deleted or
moved. When this quirk is disabled (or the
VM type isn't KVM_X86_DEFAULT_VM), KVM only
ensures the backing memory of the deleted
or moved memslot isn't reachable, i.e. KVM
_may_ invalidate only SPTEs related to the
memslot.
=================================== ============================================
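
A hedged userspace sketch (not part of this document): the quirk is presumably
disabled through KVM_ENABLE_CAP with KVM_CAP_DISABLE_QUIRKS2 on the VM file
descriptor, assuming a uapi header new enough to define
KVM_X86_QUIRK_SLOT_ZAP_ALL::

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* vm_fd is the VM file descriptor returned by KVM_CREATE_VM. */
    static int disable_slot_zap_all_quirk(int vm_fd)
    {
            struct kvm_enable_cap cap = {
                    .cap = KVM_CAP_DISABLE_QUIRKS2,
                    .args = { KVM_X86_QUIRK_SLOT_ZAP_ALL },
            };

            return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
    }
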
7.32 KVM_CAP_MAX_VCPU_ID

View File

@ -136,7 +136,7 @@ For direct sp, we can easily avoid it since the spte of direct sp is fixed
to gfn. For indirect sp, we disabled fast page fault for simplicity.
A solution for indirect sp could be to pin the gfn, for example via
kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg. After the pinning:
gfn_to_pfn_memslot_atomic, before the cmpxchg. After the pinning:
- We have held the refcount of pfn; that means the pfn can not be freed and
be reused for another gfn.

View File

@ -258,12 +258,6 @@ L: linux-acenic@sunsite.dk
S: Maintained
F: drivers/net/ethernet/alteon/acenic*
ACER ASPIRE 1 EMBEDDED CONTROLLER DRIVER
M: Nikita Travkin <nikita@trvn.ru>
S: Maintained
F: Documentation/devicetree/bindings/platform/acer,aspire1-ec.yaml
F: drivers/platform/arm64/acer-aspire1-ec.c
ACER ASPIRE ONE TEMPERATURE AND FAN DRIVER
M: Peter Kaestle <peter@piie.net>
L: platform-driver-x86@vger.kernel.org
@ -888,7 +882,6 @@ F: drivers/staging/media/sunxi/cedrus/
ALPHA PORT
M: Richard Henderson <richard.henderson@linaro.org>
M: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
M: Matt Turner <mattst88@gmail.com>
L: linux-alpha@vger.kernel.org
S: Odd Fixes
@ -1761,8 +1754,8 @@ F: include/uapi/linux/if_arcnet.h
ARM AND ARM64 SoC SUB-ARCHITECTURES (COMMON PARTS)
M: Arnd Bergmann <arnd@arndb.de>
M: Olof Johansson <olof@lixom.net>
M: soc@kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: soc@lists.linux.dev
S: Maintained
P: Documentation/process/maintainer-soc.rst
C: irc://irc.libera.chat/armlinux
@ -2263,12 +2256,6 @@ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-ep93xx/ts72xx.c
ARM/CIRRUS LOGIC CLPS711X ARM ARCHITECTURE
M: Alexander Shiyan <shc_work@mail.ru>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Odd Fixes
N: clps711x
ARM/CIRRUS LOGIC EP93XX ARM ARCHITECTURE
M: Hartley Sweeten <hsweeten@visionengravers.com>
M: Alexander Sverdlin <alexander.sverdlin@gmail.com>
@ -3815,14 +3802,6 @@ F: drivers/video/backlight/
F: include/linux/backlight.h
F: include/linux/pwm_backlight.h
BAIKAL-T1 PVT HARDWARE MONITOR DRIVER
M: Serge Semin <fancer.lancer@gmail.com>
L: linux-hwmon@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/hwmon/baikal,bt1-pvt.yaml
F: Documentation/hwmon/bt1-pvt.rst
F: drivers/hwmon/bt1-pvt.[ch]
BARCO P50 GPIO DRIVER
M: Santosh Kumar Yadav <santoshkumar.yadav@barco.com>
M: Peter Korsgaard <peter.korsgaard@barco.com>
@ -6476,7 +6455,6 @@ F: drivers/mtd/nand/raw/denali*
DESIGNWARE EDMA CORE IP DRIVER
M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
R: Serge Semin <fancer.lancer@gmail.com>
L: dmaengine@vger.kernel.org
S: Maintained
F: drivers/dma/dw-edma/
@ -9759,14 +9737,6 @@ F: drivers/gpio/gpiolib-cdev.c
F: include/uapi/linux/gpio.h
F: tools/gpio/
GRE DEMULTIPLEXER DRIVER
M: Dmitry Kozlov <xeb@mail.ru>
L: netdev@vger.kernel.org
S: Maintained
F: include/net/gre.h
F: net/ipv4/gre_demux.c
F: net/ipv4/gre_offload.c
GRETH 10/100/1G Ethernet MAC device driver
M: Andreas Larsson <andreas@gaisler.com>
L: netdev@vger.kernel.org
@ -11283,10 +11253,10 @@ F: security/integrity/
F: security/integrity/ima/
INTEGRITY POLICY ENFORCEMENT (IPE)
M: Fan Wu <wufan@linux.microsoft.com>
M: Fan Wu <wufan@kernel.org>
L: linux-security-module@vger.kernel.org
S: Supported
T: git https://github.com/microsoft/ipe.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/wufan/ipe.git
F: Documentation/admin-guide/LSM/ipe.rst
F: Documentation/security/ipe.rst
F: scripts/ipe/
@ -11604,6 +11574,16 @@ F: drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c
F: drivers/crypto/intel/keembay/ocs-hcu.c
F: drivers/crypto/intel/keembay/ocs-hcu.h
INTEL LA JOLLA COVE ADAPTER (LJCA) USB I/O EXPANDER DRIVERS
M: Wentong Wu <wentong.wu@intel.com>
M: Sakari Ailus <sakari.ailus@linux.intel.com>
S: Maintained
F: drivers/gpio/gpio-ljca.c
F: drivers/i2c/busses/i2c-ljca.c
F: drivers/spi/spi-ljca.c
F: drivers/usb/misc/usb-ljca.c
F: include/linux/usb/ljca.h
INTEL MANAGEMENT ENGINE (mei)
M: Tomas Winkler <tomas.winkler@intel.com>
L: linux-kernel@vger.kernel.org
@ -12242,6 +12222,7 @@ R: Dmitry Vyukov <dvyukov@google.com>
R: Vincenzo Frascino <vincenzo.frascino@arm.com>
L: kasan-dev@googlegroups.com
S: Maintained
B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
F: Documentation/dev-tools/kasan.rst
F: arch/*/include/asm/*kasan.h
F: arch/*/mm/kasan_init*
@ -12265,6 +12246,7 @@ R: Dmitry Vyukov <dvyukov@google.com>
R: Andrey Konovalov <andreyknvl@gmail.com>
L: kasan-dev@googlegroups.com
S: Maintained
B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
F: Documentation/dev-tools/kcov.rst
F: include/linux/kcov.h
F: include/uapi/linux/kcov.h
@ -12947,12 +12929,6 @@ S: Maintained
F: drivers/ata/pata_arasan_cf.c
F: include/linux/pata_arasan_cf_data.h
LIBATA PATA DRIVERS
R: Sergey Shtylyov <s.shtylyov@omp.ru>
L: linux-ide@vger.kernel.org
F: drivers/ata/ata_*.c
F: drivers/ata/pata_*.c
LIBATA PATA FARADAY FTIDE010 AND GEMINI SATA BRIDGE DRIVERS
M: Linus Walleij <linus.walleij@linaro.org>
L: linux-ide@vger.kernel.org
@ -12969,14 +12945,6 @@ F: drivers/ata/ahci_platform.c
F: drivers/ata/libahci_platform.c
F: include/linux/ahci_platform.h
LIBATA SATA AHCI SYNOPSYS DWC CONTROLLER DRIVER
M: Serge Semin <fancer.lancer@gmail.com>
L: linux-ide@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/ata/baikal,bt1-ahci.yaml
F: Documentation/devicetree/bindings/ata/snps,dwc-ahci.yaml
F: drivers/ata/ahci_dwc.c
LIBATA SATA PROMISE TX2/TX4 CONTROLLER DRIVER
M: Mikael Pettersson <mikpelinux@gmail.com>
L: linux-ide@vger.kernel.org
@ -14172,16 +14140,6 @@ S: Maintained
T: git git://linuxtv.org/media_tree.git
F: drivers/media/platform/nxp/imx-pxp.[ch]
MEDIA DRIVERS FOR ASCOT2E
M: Sergey Kozlov <serjk@netup.ru>
M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
W: http://netup.tv/
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/ascot2e*
MEDIA DRIVERS FOR CXD2099AR CI CONTROLLERS
M: Jasmin Jessich <jasmin@anw.at>
L: linux-media@vger.kernel.org
@ -14190,16 +14148,6 @@ W: https://linuxtv.org
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/cxd2099*
MEDIA DRIVERS FOR CXD2841ER
M: Sergey Kozlov <serjk@netup.ru>
M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
W: http://netup.tv/
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/cxd2841er*
MEDIA DRIVERS FOR CXD2880
M: Yasunari Takiguchi <Yasunari.Takiguchi@sony.com>
L: linux-media@vger.kernel.org
@ -14244,35 +14192,6 @@ F: drivers/media/platform/nxp/imx-mipi-csis.c
F: drivers/media/platform/nxp/imx7-media-csi.c
F: drivers/media/platform/nxp/imx8mq-mipi-csi2.c
MEDIA DRIVERS FOR HELENE
M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
W: http://netup.tv/
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/helene*
MEDIA DRIVERS FOR HORUS3A
M: Sergey Kozlov <serjk@netup.ru>
M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
W: http://netup.tv/
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/horus3a*
MEDIA DRIVERS FOR LNBH25
M: Sergey Kozlov <serjk@netup.ru>
M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
W: http://netup.tv/
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/lnbh25*
MEDIA DRIVERS FOR MXL5XX TUNER DEMODULATORS
L: linux-media@vger.kernel.org
S: Orphan
@ -14280,16 +14199,6 @@ W: https://linuxtv.org
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/mxl5xx*
MEDIA DRIVERS FOR NETUP PCI UNIVERSAL DVB devices
M: Sergey Kozlov <serjk@netup.ru>
M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
W: http://netup.tv/
T: git git://linuxtv.org/media_tree.git
F: drivers/media/pci/netup_unidvb/*
MEDIA DRIVERS FOR NVIDIA TEGRA - VDE
M: Dmitry Osipenko <digetx@gmail.com>
L: linux-media@vger.kernel.org
@ -14907,9 +14816,10 @@ N: include/linux/page[-_]*
MEMORY MAPPING
M: Andrew Morton <akpm@linux-foundation.org>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Vlastimil Babka <vbabka@suse.cz>
R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Jann Horn <jannh@google.com>
L: linux-mm@kvack.org
S: Maintained
W: http://www.linux-mm.org
@ -14932,13 +14842,6 @@ F: drivers/mtd/
F: include/linux/mtd/
F: include/uapi/mtd/
MEMSENSING MICROSYSTEMS MSA311 DRIVER
M: Dmitry Rokosov <ddrokosov@sberdevices.ru>
L: linux-iio@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/iio/accel/memsensing,msa311.yaml
F: drivers/iio/accel/msa311.c
MEN A21 WATCHDOG DRIVER
M: Johannes Thumshirn <morbidrsa@gmail.com>
L: linux-watchdog@vger.kernel.org
@ -15272,7 +15175,6 @@ F: drivers/tty/serial/8250/8250_pci1xxxx.c
MICROCHIP POLARFIRE FPGA DRIVERS
M: Conor Dooley <conor.dooley@microchip.com>
R: Vladimir Georgiev <v.georgiev@metrotek.ru>
L: linux-fpga@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/fpga/microchip,mpf-spi-fpga-mgr.yaml
@ -15527,17 +15429,6 @@ F: arch/mips/
F: drivers/platform/mips/
F: include/dt-bindings/mips/
MIPS BAIKAL-T1 PLATFORM
M: Serge Semin <fancer.lancer@gmail.com>
L: linux-mips@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/bus/baikal,bt1-*.yaml
F: Documentation/devicetree/bindings/clock/baikal,bt1-*.yaml
F: drivers/bus/bt1-*.c
F: drivers/clk/baikal-t1/
F: drivers/memory/bt1-l2-ctl.c
F: drivers/mtd/maps/physmap-bt1-rom.[ch]
MIPS BOSTON DEVELOPMENT BOARD
M: Paul Burton <paulburton@kernel.org>
L: linux-mips@vger.kernel.org
@ -15550,7 +15441,6 @@ F: include/dt-bindings/clock/boston-clock.h
MIPS CORE DRIVERS
M: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
M: Serge Semin <fancer.lancer@gmail.com>
L: linux-mips@vger.kernel.org
S: Supported
F: drivers/bus/mips_cdmm.c
@ -16086,6 +15976,7 @@ F: include/uapi/linux/net_dropmon.h
F: net/core/drop_monitor.c
NETWORKING DRIVERS
M: Andrew Lunn <andrew+netdev@lunn.ch>
M: "David S. Miller" <davem@davemloft.net>
M: Eric Dumazet <edumazet@google.com>
M: Jakub Kicinski <kuba@kernel.org>
@ -16517,12 +16408,6 @@ F: include/linux/ntb.h
F: include/linux/ntb_transport.h
F: tools/testing/selftests/ntb/
NTB IDT DRIVER
M: Serge Semin <fancer.lancer@gmail.com>
L: ntb@lists.linux.dev
S: Supported
F: drivers/ntb/hw/idt/
NTB INTEL DRIVER
M: Dave Jiang <dave.jiang@intel.com>
L: ntb@lists.linux.dev
@ -18543,13 +18428,6 @@ F: drivers/pps/
F: include/linux/pps*.h
F: include/uapi/linux/pps.h
PPTP DRIVER
M: Dmitry Kozlov <xeb@mail.ru>
L: netdev@vger.kernel.org
S: Maintained
W: http://sourceforge.net/projects/accel-pptp
F: drivers/net/ppp/pptp.c
PRESSURE STALL INFORMATION (PSI)
M: Johannes Weiner <hannes@cmpxchg.org>
M: Suren Baghdasaryan <surenb@google.com>
@ -19523,6 +19401,14 @@ S: Maintained
F: Documentation/tools/rtla/
F: tools/tracing/rtla/
Real-time Linux (PREEMPT_RT)
M: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
M: Clark Williams <clrkwllms@kernel.org>
M: Steven Rostedt <rostedt@goodmis.org>
L: linux-rt-devel@lists.linux.dev
S: Supported
K: PREEMPT_RT
REALTEK AUDIO CODECS
M: Oder Chiou <oder_chiou@realtek.com>
S: Maintained
@ -19632,15 +19518,6 @@ S: Supported
F: Documentation/devicetree/bindings/i2c/renesas,iic-emev2.yaml
F: drivers/i2c/busses/i2c-emev2.c
RENESAS ETHERNET AVB DRIVER
R: Sergey Shtylyov <s.shtylyov@omp.ru>
L: netdev@vger.kernel.org
L: linux-renesas-soc@vger.kernel.org
F: Documentation/devicetree/bindings/net/renesas,etheravb.yaml
F: drivers/net/ethernet/renesas/Kconfig
F: drivers/net/ethernet/renesas/Makefile
F: drivers/net/ethernet/renesas/ravb*
RENESAS ETHERNET SWITCH DRIVER
R: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
L: netdev@vger.kernel.org
@ -19690,14 +19567,6 @@ F: Documentation/devicetree/bindings/i2c/renesas,rmobile-iic.yaml
F: drivers/i2c/busses/i2c-rcar.c
F: drivers/i2c/busses/i2c-sh_mobile.c
RENESAS R-CAR SATA DRIVER
R: Sergey Shtylyov <s.shtylyov@omp.ru>
L: linux-ide@vger.kernel.org
L: linux-renesas-soc@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/ata/renesas,rcar-sata.yaml
F: drivers/ata/sata_rcar.c
RENESAS R-CAR THERMAL DRIVERS
M: Niklas Söderlund <niklas.soderlund@ragnatech.se>
L: linux-renesas-soc@vger.kernel.org
@ -19773,16 +19642,6 @@ S: Supported
F: Documentation/devicetree/bindings/i2c/renesas,rzv2m.yaml
F: drivers/i2c/busses/i2c-rzv2m.c
RENESAS SUPERH ETHERNET DRIVER
R: Sergey Shtylyov <s.shtylyov@omp.ru>
L: netdev@vger.kernel.org
L: linux-renesas-soc@vger.kernel.org
F: Documentation/devicetree/bindings/net/renesas,ether.yaml
F: drivers/net/ethernet/renesas/Kconfig
F: drivers/net/ethernet/renesas/Makefile
F: drivers/net/ethernet/renesas/sh_eth*
F: include/linux/sh_eth.h
RENESAS USB PHY DRIVER
M: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
L: linux-renesas-soc@vger.kernel.org
@ -21777,8 +21636,8 @@ F: drivers/accessibility/speakup/
SPEAR PLATFORM/CLOCK/PINCTRL SUPPORT
M: Viresh Kumar <vireshk@kernel.org>
M: Shiraz Hashim <shiraz.linux.kernel@gmail.com>
M: soc@kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: soc@lists.linux.dev
S: Maintained
W: http://www.st.com/spear
F: arch/arm/boot/dts/st/spear*
@ -22436,19 +22295,11 @@ F: drivers/tty/serial/8250/8250_lpss.c
SYNOPSYS DESIGNWARE APB GPIO DRIVER
M: Hoan Tran <hoan@os.amperecomputing.com>
M: Serge Semin <fancer.lancer@gmail.com>
L: linux-gpio@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/gpio/snps,dw-apb-gpio.yaml
F: drivers/gpio/gpio-dwapb.c
SYNOPSYS DESIGNWARE APB SSI DRIVER
M: Serge Semin <fancer.lancer@gmail.com>
L: linux-spi@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/spi/snps,dw-apb-ssi.yaml
F: drivers/spi/spi-dw*
SYNOPSYS DESIGNWARE AXI DMAC DRIVER
M: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
S: Maintained
@ -23758,12 +23609,6 @@ L: linux-input@vger.kernel.org
S: Maintained
F: drivers/hid/hid-udraw-ps3.c
UFS FILESYSTEM
M: Evgeniy Dushistov <dushistov@mail.ru>
S: Maintained
F: Documentation/admin-guide/ufs.rst
F: fs/ufs/
UHID USERSPACE HID IO DRIVER
M: David Rheinsberg <david@readahead.eu>
L: linux-input@vger.kernel.org
@ -24066,6 +23911,7 @@ USB RAW GADGET DRIVER
R: Andrey Konovalov <andreyknvl@gmail.com>
L: linux-usb@vger.kernel.org
S: Maintained
B: https://github.com/xairy/raw-gadget/issues
F: Documentation/usb/raw-gadget.rst
F: drivers/usb/gadget/legacy/raw_gadget.c
F: include/uapi/linux/usb/raw_gadget.h
@ -24737,9 +24583,10 @@ F: tools/testing/vsock/
VMA
M: Andrew Morton <akpm@linux-foundation.org>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Vlastimil Babka <vbabka@suse.cz>
R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Jann Horn <jannh@google.com>
L: linux-mm@kvack.org
S: Maintained
W: https://www.linux-mm.org

View File

@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 12
SUBLEVEL = 0
EXTRAVERSION = -rc3
EXTRAVERSION = -rc4
NAME = Baby Opossum Posse
# *DOCUMENTATION*

View File

@ -838,7 +838,7 @@ config CFI_CLANG
config CFI_ICALL_NORMALIZE_INTEGERS
bool "Normalize CFI tags for integers"
depends on CFI_CLANG
depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS
depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS_CLANG
help
This option normalizes the CFI tags for integer types so that all
integer types of the same size and signedness receive the same CFI
@ -851,21 +851,19 @@ config CFI_ICALL_NORMALIZE_INTEGERS
This option is necessary for using CFI with Rust. If unsure, say N.
config HAVE_CFI_ICALL_NORMALIZE_INTEGERS
def_bool !GCOV_KERNEL && !KASAN
depends on CFI_CLANG
config HAVE_CFI_ICALL_NORMALIZE_INTEGERS_CLANG
def_bool y
depends on $(cc-option,-fsanitize=kcfi -fsanitize-cfi-icall-experimental-normalize-integers)
help
Is CFI_ICALL_NORMALIZE_INTEGERS supported with the set of compilers
currently in use?
# With GCOV/KASAN we need this fix: https://github.com/llvm/llvm-project/pull/104826
depends on CLANG_VERSION >= 190000 || (!GCOV_KERNEL && !KASAN_GENERIC && !KASAN_SW_TAGS)
This option defaults to false if GCOV or KASAN is enabled, as there is
an LLVM bug that makes normalized integers tags incompatible with
KASAN and GCOV. Kconfig currently does not have the infrastructure to
detect whether your rustc compiler contains the fix for this bug, so
it is assumed that it doesn't. If your compiler has the fix, you can
explicitly enable this option in your config file. The Kconfig logic
needed to detect this will be added in a future kernel release.
config HAVE_CFI_ICALL_NORMALIZE_INTEGERS_RUSTC
def_bool y
depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS_CLANG
depends on RUSTC_VERSION >= 107900
# With GCOV/KASAN we need this fix: https://github.com/rust-lang/rust/pull/129373
depends on (RUSTC_LLVM_VERSION >= 190000 && RUSTC_VERSION >= 108200) || \
(!GCOV_KERNEL && !KASAN_GENERIC && !KASAN_SW_TAGS)
config CFI_PERMISSIVE
bool "Use CFI in permissive mode"

View File

@ -77,7 +77,7 @@
};
&hdmi {
hpd-gpios = <&expgpio 1 GPIO_ACTIVE_LOW>;
hpd-gpios = <&expgpio 0 GPIO_ACTIVE_LOW>;
power-domains = <&power RPI_POWER_DOMAIN_HDMI>;
status = "okay";
};

View File

@ -136,7 +136,7 @@
};
cp0_mdio_pins: cp0-mdio-pins {
marvell,pins = "mpp40", "mpp41";
marvell,pins = "mpp0", "mpp1";
marvell,function = "ge";
};

View File

@ -178,6 +178,7 @@ struct kvm_nvhe_init_params {
unsigned long hcr_el2;
unsigned long vttbr;
unsigned long vtcr;
unsigned long tmp;
};
/*

View File

@ -51,6 +51,7 @@
#define KVM_REQ_RELOAD_PMU KVM_ARCH_REQ(5)
#define KVM_REQ_SUSPEND KVM_ARCH_REQ(6)
#define KVM_REQ_RESYNC_PMU_EL0 KVM_ARCH_REQ(7)
#define KVM_REQ_NESTED_S2_UNMAP KVM_ARCH_REQ(8)
#define KVM_DIRTY_LOG_MANUAL_CAPS (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
KVM_DIRTY_LOG_INITIALLY_SET)
@ -211,6 +212,12 @@ struct kvm_s2_mmu {
*/
bool nested_stage2_enabled;
/*
* true when this MMU needs to be unmapped before being used for a new
* purpose.
*/
bool pending_unmap;
/*
* 0: Nobody is currently using this, check vttbr for validity
* >0: Somebody is actively using this.

View File

@ -166,7 +166,8 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
int create_hyp_stack(phys_addr_t phys_addr, unsigned long *haddr);
void __init free_hyp_pgds(void);
void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size);
void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
u64 size, bool may_block);
void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end);
void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end);

View File

@ -78,6 +78,8 @@ extern void kvm_s2_mmu_iterate_by_vmid(struct kvm *kvm, u16 vmid,
extern void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu);
extern void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu);
extern void check_nested_vcpu_requests(struct kvm_vcpu *vcpu);
struct kvm_s2_trans {
phys_addr_t output;
unsigned long block_size;
@ -124,7 +126,7 @@ extern int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu,
struct kvm_s2_trans *trans);
extern int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2);
extern void kvm_nested_s2_wp(struct kvm *kvm);
extern void kvm_nested_s2_unmap(struct kvm *kvm);
extern void kvm_nested_s2_unmap(struct kvm *kvm, bool may_block);
extern void kvm_nested_s2_flush(struct kvm *kvm);
unsigned long compute_tlb_inval_range(struct kvm_s2_mmu *mmu, u64 val);

View File

@ -10,11 +10,9 @@
#include <asm/insn.h>
#include <asm/probes.h>
#define MAX_UINSN_BYTES AARCH64_INSN_SIZE
#define UPROBE_SWBP_INSN cpu_to_le32(BRK64_OPCODE_UPROBES)
#define UPROBE_SWBP_INSN_SIZE AARCH64_INSN_SIZE
#define UPROBE_XOL_SLOT_BYTES MAX_UINSN_BYTES
#define UPROBE_XOL_SLOT_BYTES AARCH64_INSN_SIZE
typedef __le32 uprobe_opcode_t;
@ -23,8 +21,8 @@ struct arch_uprobe_task {
struct arch_uprobe {
union {
u8 insn[MAX_UINSN_BYTES];
u8 ixol[MAX_UINSN_BYTES];
__le32 insn;
__le32 ixol;
};
struct arch_probe_insn api;
bool simulate;

View File

@ -146,6 +146,7 @@ int main(void)
DEFINE(NVHE_INIT_HCR_EL2, offsetof(struct kvm_nvhe_init_params, hcr_el2));
DEFINE(NVHE_INIT_VTTBR, offsetof(struct kvm_nvhe_init_params, vttbr));
DEFINE(NVHE_INIT_VTCR, offsetof(struct kvm_nvhe_init_params, vtcr));
DEFINE(NVHE_INIT_TMP, offsetof(struct kvm_nvhe_init_params, tmp));
#endif
#ifdef CONFIG_CPU_PM
DEFINE(CPU_CTX_SP, offsetof(struct cpu_suspend_ctx, sp));

View File

@ -99,10 +99,6 @@ arm_probe_decode_insn(probe_opcode_t insn, struct arch_probe_insn *api)
aarch64_insn_is_blr(insn) ||
aarch64_insn_is_ret(insn)) {
api->handler = simulate_br_blr_ret;
} else if (aarch64_insn_is_ldr_lit(insn)) {
api->handler = simulate_ldr_literal;
} else if (aarch64_insn_is_ldrsw_lit(insn)) {
api->handler = simulate_ldrsw_literal;
} else {
/*
* Instruction cannot be stepped out-of-line and we don't
@ -140,6 +136,17 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
probe_opcode_t insn = le32_to_cpu(*addr);
probe_opcode_t *scan_end = NULL;
unsigned long size = 0, offset = 0;
struct arch_probe_insn *api = &asi->api;
if (aarch64_insn_is_ldr_lit(insn)) {
api->handler = simulate_ldr_literal;
decoded = INSN_GOOD_NO_SLOT;
} else if (aarch64_insn_is_ldrsw_lit(insn)) {
api->handler = simulate_ldrsw_literal;
decoded = INSN_GOOD_NO_SLOT;
} else {
decoded = arm_probe_decode_insn(insn, &asi->api);
}
/*
* If there's a symbol defined in front of and near enough to
@ -157,7 +164,6 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
else
scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;
}
decoded = arm_probe_decode_insn(insn, &asi->api);
if (decoded != INSN_REJECTED && scan_end)
if (is_probed_address_atomic(addr - 1, scan_end))

View File

@ -171,17 +171,15 @@ simulate_tbz_tbnz(u32 opcode, long addr, struct pt_regs *regs)
void __kprobes
simulate_ldr_literal(u32 opcode, long addr, struct pt_regs *regs)
{
u64 *load_addr;
unsigned long load_addr;
int xn = opcode & 0x1f;
int disp;
disp = ldr_displacement(opcode);
load_addr = (u64 *) (addr + disp);
load_addr = addr + ldr_displacement(opcode);
if (opcode & (1 << 30)) /* x0-x30 */
set_x_reg(regs, xn, *load_addr);
set_x_reg(regs, xn, READ_ONCE(*(u64 *)load_addr));
else /* w0-w30 */
set_w_reg(regs, xn, *load_addr);
set_w_reg(regs, xn, READ_ONCE(*(u32 *)load_addr));
instruction_pointer_set(regs, instruction_pointer(regs) + 4);
}
@ -189,14 +187,12 @@ simulate_ldr_literal(u32 opcode, long addr, struct pt_regs *regs)
void __kprobes
simulate_ldrsw_literal(u32 opcode, long addr, struct pt_regs *regs)
{
s32 *load_addr;
unsigned long load_addr;
int xn = opcode & 0x1f;
int disp;
disp = ldr_displacement(opcode);
load_addr = (s32 *) (addr + disp);
load_addr = addr + ldr_displacement(opcode);
set_x_reg(regs, xn, *load_addr);
set_x_reg(regs, xn, READ_ONCE(*(s32 *)load_addr));
instruction_pointer_set(regs, instruction_pointer(regs) + 4);
}

View File

@ -42,7 +42,7 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
else if (!IS_ALIGNED(addr, AARCH64_INSN_SIZE))
return -EINVAL;
insn = *(probe_opcode_t *)(&auprobe->insn[0]);
insn = le32_to_cpu(auprobe->insn);
switch (arm_probe_decode_insn(insn, &auprobe->api)) {
case INSN_REJECTED:
@ -108,7 +108,7 @@ bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
if (!auprobe->simulate)
return false;
insn = *(probe_opcode_t *)(&auprobe->insn[0]);
insn = le32_to_cpu(auprobe->insn);
addr = instruction_pointer(regs);
if (auprobe->api.handler)

View File

@ -412,6 +412,9 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
p->thread.cpu_context.x19 = (unsigned long)args->fn;
p->thread.cpu_context.x20 = (unsigned long)args->fn_arg;
if (system_supports_poe())
p->thread.por_el0 = POR_EL0_INIT;
}
p->thread.cpu_context.pc = (unsigned long)ret_from_fork;
p->thread.cpu_context.sp = (unsigned long)childregs;

View File

@ -997,6 +997,9 @@ static int kvm_vcpu_suspend(struct kvm_vcpu *vcpu)
static int check_vcpu_requests(struct kvm_vcpu *vcpu)
{
if (kvm_request_pending(vcpu)) {
if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu))
return -EIO;
if (kvm_check_request(KVM_REQ_SLEEP, vcpu))
kvm_vcpu_sleep(vcpu);
@ -1031,6 +1034,8 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
if (kvm_dirty_ring_check_request(vcpu))
return 0;
check_nested_vcpu_requests(vcpu);
}
return 1;

View File

@ -24,28 +24,25 @@
.align 11
SYM_CODE_START(__kvm_hyp_init)
ventry __invalid // Synchronous EL2t
ventry __invalid // IRQ EL2t
ventry __invalid // FIQ EL2t
ventry __invalid // Error EL2t
ventry . // Synchronous EL2t
ventry . // IRQ EL2t
ventry . // FIQ EL2t
ventry . // Error EL2t
ventry __invalid // Synchronous EL2h
ventry __invalid // IRQ EL2h
ventry __invalid // FIQ EL2h
ventry __invalid // Error EL2h
ventry . // Synchronous EL2h
ventry . // IRQ EL2h
ventry . // FIQ EL2h
ventry . // Error EL2h
ventry __do_hyp_init // Synchronous 64-bit EL1
ventry __invalid // IRQ 64-bit EL1
ventry __invalid // FIQ 64-bit EL1
ventry __invalid // Error 64-bit EL1
ventry . // IRQ 64-bit EL1
ventry . // FIQ 64-bit EL1
ventry . // Error 64-bit EL1
ventry __invalid // Synchronous 32-bit EL1
ventry __invalid // IRQ 32-bit EL1
ventry __invalid // FIQ 32-bit EL1
ventry __invalid // Error 32-bit EL1
__invalid:
b .
ventry . // Synchronous 32-bit EL1
ventry . // IRQ 32-bit EL1
ventry . // FIQ 32-bit EL1
ventry . // Error 32-bit EL1
/*
* Only uses x0..x3 so as to not clobber callee-saved SMCCC registers.
@ -76,6 +73,13 @@ __do_hyp_init:
eret
SYM_CODE_END(__kvm_hyp_init)
SYM_CODE_START_LOCAL(__kvm_init_el2_state)
/* Initialize EL2 CPU state to sane values. */
init_el2_state // Clobbers x0..x2
finalise_el2_state
ret
SYM_CODE_END(__kvm_init_el2_state)
/*
* Initialize the hypervisor in EL2.
*
@ -102,9 +106,12 @@ SYM_CODE_START_LOCAL(___kvm_hyp_init)
// TPIDR_EL2 is used to preserve x0 across the macro maze...
isb
msr tpidr_el2, x0
init_el2_state
finalise_el2_state
str lr, [x0, #NVHE_INIT_TMP]
bl __kvm_init_el2_state
mrs x0, tpidr_el2
ldr lr, [x0, #NVHE_INIT_TMP]
1:
ldr x1, [x0, #NVHE_INIT_TPIDR_EL2]
@ -199,9 +206,8 @@ SYM_CODE_START_LOCAL(__kvm_hyp_init_cpu)
2: msr SPsel, #1 // We want to use SP_EL{1,2}
/* Initialize EL2 CPU state to sane values. */
init_el2_state // Clobbers x0..x2
finalise_el2_state
bl __kvm_init_el2_state
__init_el2_nvhe_prepare_eret
/* Enable MMU, set vectors and stack. */

View File

@ -317,7 +317,7 @@ int kvm_smccc_call_handler(struct kvm_vcpu *vcpu)
* to the guest, and hide SSBS so that the
* guest stays protected.
*/
if (cpus_have_final_cap(ARM64_SSBS))
if (kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SSBS, IMP))
break;
fallthrough;
case SPECTRE_UNAFFECTED:
@ -428,7 +428,7 @@ int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
* Convert the workaround level into an easy-to-compare number, where higher
* values mean better protection.
*/
static int get_kernel_wa_level(u64 regid)
static int get_kernel_wa_level(struct kvm_vcpu *vcpu, u64 regid)
{
switch (regid) {
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
@ -449,7 +449,7 @@ static int get_kernel_wa_level(u64 regid)
* don't have any FW mitigation if SSBS is there at
* all times.
*/
if (cpus_have_final_cap(ARM64_SSBS))
if (kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SSBS, IMP))
return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL;
fallthrough;
case SPECTRE_UNAFFECTED:
@ -486,7 +486,7 @@ int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3:
val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
val = get_kernel_wa_level(vcpu, reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
break;
case KVM_REG_ARM_STD_BMAP:
val = READ_ONCE(smccc_feat->std_bmap);
@ -588,7 +588,7 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
if (val & ~KVM_REG_FEATURE_LEVEL_MASK)
return -EINVAL;
if (get_kernel_wa_level(reg->id) < val)
if (get_kernel_wa_level(vcpu, reg->id) < val)
return -EINVAL;
return 0;
@ -624,7 +624,7 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
* We can deal with NOT_AVAIL on NOT_REQUIRED, but not the
* other way around.
*/
if (get_kernel_wa_level(reg->id) < wa_level)
if (get_kernel_wa_level(vcpu, reg->id) < wa_level)
return -EINVAL;
return 0;

View File

@ -328,9 +328,10 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
may_block));
}
void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
u64 size, bool may_block)
{
__unmap_stage2_range(mmu, start, size, true);
__unmap_stage2_range(mmu, start, size, may_block);
}
void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
@ -1015,7 +1016,7 @@ static void stage2_unmap_memslot(struct kvm *kvm,
if (!(vma->vm_flags & VM_PFNMAP)) {
gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
kvm_stage2_unmap_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
kvm_stage2_unmap_range(&kvm->arch.mmu, gpa, vm_end - vm_start, true);
}
hva = vm_end;
} while (hva < reg_end);
@ -1042,7 +1043,7 @@ void stage2_unmap_vm(struct kvm *kvm)
kvm_for_each_memslot(memslot, bkt, slots)
stage2_unmap_memslot(kvm, memslot);
kvm_nested_s2_unmap(kvm);
kvm_nested_s2_unmap(kvm, true);
write_unlock(&kvm->mmu_lock);
mmap_read_unlock(current->mm);
@ -1912,7 +1913,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
(range->end - range->start) << PAGE_SHIFT,
range->may_block);
kvm_nested_s2_unmap(kvm);
kvm_nested_s2_unmap(kvm, range->may_block);
return false;
}
@ -2179,8 +2180,8 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
phys_addr_t size = slot->npages << PAGE_SHIFT;
write_lock(&kvm->mmu_lock);
kvm_stage2_unmap_range(&kvm->arch.mmu, gpa, size);
kvm_nested_s2_unmap(kvm);
kvm_stage2_unmap_range(&kvm->arch.mmu, gpa, size, true);
kvm_nested_s2_unmap(kvm, true);
write_unlock(&kvm->mmu_lock);
}

View File

@ -632,9 +632,9 @@ static struct kvm_s2_mmu *get_s2_mmu_nested(struct kvm_vcpu *vcpu)
/* Set the scene for the next search */
kvm->arch.nested_mmus_next = (i + 1) % kvm->arch.nested_mmus_size;
/* Clear the old state */
/* Make sure we don't forget to do the laundry */
if (kvm_s2_mmu_valid(s2_mmu))
kvm_stage2_unmap_range(s2_mmu, 0, kvm_phys_size(s2_mmu));
s2_mmu->pending_unmap = true;
/*
* The virtual VMID (modulo CnP) will be used as a key when matching
@ -650,6 +650,16 @@ static struct kvm_s2_mmu *get_s2_mmu_nested(struct kvm_vcpu *vcpu)
out:
atomic_inc(&s2_mmu->refcnt);
/*
* Set the vCPU request to perform an unmap, even if the pending unmap
* originates from another vCPU. This guarantees that the MMU has been
* completely unmapped before any vCPU actually uses it, and allows
* multiple vCPUs to lend a hand with completing the unmap.
*/
if (s2_mmu->pending_unmap)
kvm_make_request(KVM_REQ_NESTED_S2_UNMAP, vcpu);
return s2_mmu;
}
@ -663,6 +673,13 @@ void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu)
void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
{
/*
* The vCPU kept its reference on the MMU after the last put, keep
* rolling with it.
*/
if (vcpu->arch.hw_mmu)
return;
if (is_hyp_ctxt(vcpu)) {
vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
} else {
@ -674,10 +691,18 @@ void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu)
{
if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu)) {
/*
* Keep a reference on the associated stage-2 MMU if the vCPU is
* scheduling out and not in WFI emulation, suggesting it is likely to
* reuse the MMU sometime soon.
*/
if (vcpu->scheduled_out && !vcpu_get_flag(vcpu, IN_WFI))
return;
if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu))
atomic_dec(&vcpu->arch.hw_mmu->refcnt);
vcpu->arch.hw_mmu = NULL;
}
vcpu->arch.hw_mmu = NULL;
}
/*
@ -730,7 +755,7 @@ void kvm_nested_s2_wp(struct kvm *kvm)
}
}
void kvm_nested_s2_unmap(struct kvm *kvm)
void kvm_nested_s2_unmap(struct kvm *kvm, bool may_block)
{
int i;
@ -740,7 +765,7 @@ void kvm_nested_s2_unmap(struct kvm *kvm)
struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
if (kvm_s2_mmu_valid(mmu))
kvm_stage2_unmap_range(mmu, 0, kvm_phys_size(mmu));
kvm_stage2_unmap_range(mmu, 0, kvm_phys_size(mmu), may_block);
}
}
@ -1184,3 +1209,17 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
return 0;
}
void check_nested_vcpu_requests(struct kvm_vcpu *vcpu)
{
if (kvm_check_request(KVM_REQ_NESTED_S2_UNMAP, vcpu)) {
struct kvm_s2_mmu *mmu = vcpu->arch.hw_mmu;
write_lock(&vcpu->kvm->mmu_lock);
if (mmu->pending_unmap) {
kvm_stage2_unmap_range(mmu, 0, kvm_phys_size(mmu), true);
mmu->pending_unmap = false;
}
write_unlock(&vcpu->kvm->mmu_lock);
}
}

View File

@ -1527,6 +1527,14 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RNDR_trap);
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_NMI);
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac);
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_GCS);
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_THE);
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTEX);
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_DF2);
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_PFAR);
break;
case SYS_ID_AA64PFR2_EL1:
/* We only expose FPMR */
@ -1550,7 +1558,8 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
break;
case SYS_ID_AA64MMFR3_EL1:
val &= ID_AA64MMFR3_EL1_TCRX | ID_AA64MMFR3_EL1_S1POE;
val &= ID_AA64MMFR3_EL1_TCRX | ID_AA64MMFR3_EL1_S1POE |
ID_AA64MMFR3_EL1_S1PIE;
break;
case SYS_ID_MMFR4_EL1:
val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX);
@ -1985,7 +1994,7 @@ static u64 reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
* one cache line.
*/
if (kvm_has_mte(vcpu->kvm))
clidr |= 2 << CLIDR_TTYPE_SHIFT(loc);
clidr |= 2ULL << CLIDR_TTYPE_SHIFT(loc);
__vcpu_sys_reg(vcpu, r->reg) = clidr;
@ -2376,7 +2385,19 @@ static const struct sys_reg_desc sys_reg_descs[] = {
ID_AA64PFR0_EL1_RAS |
ID_AA64PFR0_EL1_AdvSIMD |
ID_AA64PFR0_EL1_FP), },
ID_SANITISED(ID_AA64PFR1_EL1),
ID_WRITABLE(ID_AA64PFR1_EL1, ~(ID_AA64PFR1_EL1_PFAR |
ID_AA64PFR1_EL1_DF2 |
ID_AA64PFR1_EL1_MTEX |
ID_AA64PFR1_EL1_THE |
ID_AA64PFR1_EL1_GCS |
ID_AA64PFR1_EL1_MTE_frac |
ID_AA64PFR1_EL1_NMI |
ID_AA64PFR1_EL1_RNDR_trap |
ID_AA64PFR1_EL1_SME |
ID_AA64PFR1_EL1_RES0 |
ID_AA64PFR1_EL1_MPAM_frac |
ID_AA64PFR1_EL1_RAS_frac |
ID_AA64PFR1_EL1_MTE)),
ID_WRITABLE(ID_AA64PFR2_EL1, ID_AA64PFR2_EL1_FPMR),
ID_UNALLOCATED(4,3),
ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0),
@ -2390,7 +2411,21 @@ static const struct sys_reg_desc sys_reg_descs[] = {
.get_user = get_id_reg,
.set_user = set_id_aa64dfr0_el1,
.reset = read_sanitised_id_aa64dfr0_el1,
.val = ID_AA64DFR0_EL1_PMUVer_MASK |
/*
* Prior to FEAT_Debugv8.9, the architecture defines context-aware
* breakpoints (CTX_CMPs) as the highest numbered breakpoints (BRPs).
* KVM does not trap + emulate the breakpoint registers, and as such
* cannot support a layout that misaligns with the underlying hardware.
* While it may be possible to describe a subset that aligns with
* hardware, just prevent changes to BRPs and CTX_CMPs altogether for
* simplicity.
*
* See DDI0487K.a, section D2.8.3 Breakpoint types and linking
* of breakpoints for more details.
*/
.val = ID_AA64DFR0_EL1_DoubleLock_MASK |
ID_AA64DFR0_EL1_WRPs_MASK |
ID_AA64DFR0_EL1_PMUVer_MASK |
ID_AA64DFR0_EL1_DebugVer_MASK, },
ID_SANITISED(ID_AA64DFR1_EL1),
ID_UNALLOCATED(5,2),
@ -2433,6 +2468,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
ID_AA64MMFR2_EL1_NV |
ID_AA64MMFR2_EL1_CCIDX)),
ID_WRITABLE(ID_AA64MMFR3_EL1, (ID_AA64MMFR3_EL1_TCRX |
ID_AA64MMFR3_EL1_S1PIE |
ID_AA64MMFR3_EL1_S1POE)),
ID_SANITISED(ID_AA64MMFR4_EL1),
ID_UNALLOCATED(7,5),
@ -2903,7 +2939,7 @@ static bool handle_alle1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
* Drop all shadow S2s, resulting in S1/S2 TLBIs for each of the
* corresponding VMIDs.
*/
kvm_nested_s2_unmap(vcpu->kvm);
kvm_nested_s2_unmap(vcpu->kvm, true);
write_unlock(&vcpu->kvm->mmu_lock);
@ -2955,7 +2991,30 @@ union tlbi_info {
static void s2_mmu_unmap_range(struct kvm_s2_mmu *mmu,
const union tlbi_info *info)
{
kvm_stage2_unmap_range(mmu, info->range.start, info->range.size);
/*
* The unmap operation is allowed to drop the MMU lock and block, which
* means that @mmu could be used for a different context than the one
* currently being invalidated.
*
* This behavior is still safe, as:
*
* 1) The vCPU(s) that recycled the MMU are responsible for invalidating
* the entire MMU before reusing it, which still honors the intent
* of a TLBI.
*
* 2) Until the guest TLBI instruction is 'retired' (i.e. increment PC
* and ERET to the guest), other vCPUs are allowed to use stale
* translations.
*
* 3) Accidentally unmapping an unrelated MMU context is nonfatal, and
* at worst may cause more aborts for shadow stage-2 fills.
*
* Dropping the MMU lock also implies that shadow stage-2 fills could
* happen behind the back of the TLBI. This is still safe, though, as
* the L1 needs to put its stage-2 in a consistent state before doing
* the TLBI.
*/
kvm_stage2_unmap_range(mmu, info->range.start, info->range.size, true);
}
static bool handle_vmalls12e1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
@ -3050,7 +3109,11 @@ static void s2_mmu_unmap_ipa(struct kvm_s2_mmu *mmu,
max_size = compute_tlb_inval_range(mmu, info->ipa.addr);
base_addr &= ~(max_size - 1);
kvm_stage2_unmap_range(mmu, base_addr, max_size);
/*
* See comment in s2_mmu_unmap_range() for why this is allowed to
* reschedule.
*/
kvm_stage2_unmap_range(mmu, base_addr, max_size, true);
}
static bool handle_ipas2e1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,

View File

@ -417,8 +417,28 @@ static void __kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
kfree(vgic_cpu->private_irqs);
vgic_cpu->private_irqs = NULL;
if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3)
if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
/*
* If this vCPU is being destroyed because of a failed creation
* then unregister the redistributor to avoid leaving behind a
* dangling pointer to the vCPU struct.
*
* vCPUs that have been successfully created (i.e. added to
* kvm->vcpu_array) get unregistered in kvm_vgic_destroy(), as
* this function gets called while holding kvm->arch.config_lock
* in the VM teardown path and would otherwise introduce a lock
* inversion w.r.t. kvm->srcu.
*
* vCPUs that failed creation are torn down outside of the
* kvm->arch.config_lock and do not get unregistered in
* kvm_vgic_destroy(), meaning it is both safe and necessary to
* do so here.
*/
if (kvm_get_vcpu_by_id(vcpu->kvm, vcpu->vcpu_id) != vcpu)
vgic_unregister_redist_iodev(vcpu);
vgic_cpu->rd_iodev.base_addr = VGIC_ADDR_UNDEF;
}
}
void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
@ -524,22 +544,31 @@ int kvm_vgic_map_resources(struct kvm *kvm)
if (ret)
goto out;
dist->ready = true;
dist_base = dist->vgic_dist_base;
mutex_unlock(&kvm->arch.config_lock);
ret = vgic_register_dist_iodev(kvm, dist_base, type);
if (ret)
if (ret) {
kvm_err("Unable to register VGIC dist MMIO regions\n");
goto out_slots;
}
/*
* kvm_io_bus_register_dev() guarantees all readers see the new MMIO
* registration before returning through synchronize_srcu(), which also
* implies a full memory barrier. As such, marking the distributor as
* 'ready' here is guaranteed to be ordered after all vCPUs having seen
* a completely configured distributor.
*/
dist->ready = true;
goto out_slots;
out:
mutex_unlock(&kvm->arch.config_lock);
out_slots:
mutex_unlock(&kvm->slots_lock);
if (ret)
kvm_vgic_destroy(kvm);
kvm_vm_dead(kvm);
mutex_unlock(&kvm->slots_lock);
return ret;
}

View File

@ -236,7 +236,12 @@ static int vgic_set_common_attr(struct kvm_device *dev,
mutex_lock(&dev->kvm->arch.config_lock);
if (vgic_ready(dev->kvm) || dev->kvm->arch.vgic.nr_spis)
/*
* Either userspace has already configured NR_IRQS or
* the vgic has already been initialized and vgic_init()
* supplied a default amount of SPIs.
*/
if (dev->kvm->arch.vgic.nr_spis)
ret = -EBUSY;
else
dev->kvm->arch.vgic.nr_spis =

View File

@ -26,6 +26,10 @@ struct loongson_board_info {
#define NR_WORDS DIV_ROUND_UP(NR_CPUS, BITS_PER_LONG)
/*
* The "core" of cores_per_node and cores_per_package stands for a
* logical core, which means in a SMT system it stands for a thread.
*/
struct loongson_system_configuration {
int nr_cpus;
int nr_nodes;

View File

@ -16,7 +16,7 @@
#define XRANGE_SHIFT (48)
/* Valid address length */
#define XRANGE_SHADOW_SHIFT (PGDIR_SHIFT + PAGE_SHIFT - 3)
#define XRANGE_SHADOW_SHIFT min(cpu_vabits, VA_BITS)
/* Used for taking out the valid address */
#define XRANGE_SHADOW_MASK GENMASK_ULL(XRANGE_SHADOW_SHIFT - 1, 0)
/* One segment whole address space size */


@ -250,7 +250,7 @@
#define CSR_ESTAT_IS_WIDTH 15
#define CSR_ESTAT_IS (_ULCAST_(0x7fff) << CSR_ESTAT_IS_SHIFT)
#define LOONGARCH_CSR_ERA 0x6 /* ERA */
#define LOONGARCH_CSR_ERA 0x6 /* Exception return address */
#define LOONGARCH_CSR_BADV 0x7 /* Bad virtual address */


@ -10,6 +10,7 @@
#define __HAVE_ARCH_PMD_ALLOC_ONE
#define __HAVE_ARCH_PUD_ALLOC_ONE
#define __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL
#include <asm-generic/pgalloc.h>
static inline void pmd_populate_kernel(struct mm_struct *mm,
@ -44,6 +45,16 @@ extern void pagetable_init(void);
extern pgd_t *pgd_alloc(struct mm_struct *mm);
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
{
pte_t *pte = __pte_alloc_one_kernel(mm);
if (pte)
kernel_pte_init(pte);
return pte;
}
#define __pte_free_tlb(tlb, pte, address) \
do { \
pagetable_pte_dtor(page_ptdesc(pte)); \


@ -269,6 +269,7 @@ extern void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pm
extern void pgd_init(void *addr);
extern void pud_init(void *addr);
extern void pmd_init(void *addr);
extern void kernel_pte_init(void *addr);
/*
* Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
@ -325,39 +326,17 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
{
WRITE_ONCE(*ptep, pteval);
if (pte_val(pteval) & _PAGE_GLOBAL) {
pte_t *buddy = ptep_buddy(ptep);
/*
* Make sure the buddy is global too (if it's !none,
* it better already be global)
*/
if (pte_none(ptep_get(buddy))) {
#ifdef CONFIG_SMP
/*
* For SMP, multiple CPUs can race, so we need
* to do this atomically.
*/
__asm__ __volatile__(
__AMOR "$zero, %[global], %[buddy] \n"
: [buddy] "+ZB" (buddy->pte)
: [global] "r" (_PAGE_GLOBAL)
: "memory");
DBAR(0b11000); /* o_wrw = 0b11000 */
#else /* !CONFIG_SMP */
WRITE_ONCE(*buddy, __pte(pte_val(ptep_get(buddy)) | _PAGE_GLOBAL));
#endif /* CONFIG_SMP */
}
}
if (pte_val(pteval) & _PAGE_GLOBAL)
DBAR(0b11000); /* o_wrw = 0b11000 */
#endif
}
static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
/* Preserve global status for the pair */
if (pte_val(ptep_get(ptep_buddy(ptep))) & _PAGE_GLOBAL)
set_pte(ptep, __pte(_PAGE_GLOBAL));
else
set_pte(ptep, __pte(0));
pte_t pte = ptep_get(ptep);
pte_val(pte) &= _PAGE_GLOBAL;
set_pte(ptep, pte);
}
#define PGD_T_LOG2 (__builtin_ffs(sizeof(pgd_t)) - 1)


@ -293,13 +293,15 @@ unsigned long stack_top(void)
{
unsigned long top = TASK_SIZE & PAGE_MASK;
/* Space for the VDSO & data page */
top -= PAGE_ALIGN(current->thread.vdso->size);
top -= VVAR_SIZE;
if (current->thread.vdso) {
/* Space for the VDSO & data page */
top -= PAGE_ALIGN(current->thread.vdso->size);
top -= VVAR_SIZE;
/* Space to randomize the VDSO base */
if (current->flags & PF_RANDOMIZE)
top -= VDSO_RANDOMIZE_SIZE;
/* Space to randomize the VDSO base */
if (current->flags & PF_RANDOMIZE)
top -= VDSO_RANDOMIZE_SIZE;
}
return top;
}


@ -55,6 +55,7 @@
#define SMBIOS_FREQHIGH_OFFSET 0x17
#define SMBIOS_FREQLOW_MASK 0xFF
#define SMBIOS_CORE_PACKAGE_OFFSET 0x23
#define SMBIOS_THREAD_PACKAGE_OFFSET 0x25
#define LOONGSON_EFI_ENABLE (1 << 3)
unsigned long fw_arg0, fw_arg1, fw_arg2;
@ -125,7 +126,7 @@ static void __init parse_cpu_table(const struct dmi_header *dm)
cpu_clock_freq = freq_temp * 1000000;
loongson_sysconf.cpuname = (void *)dmi_string_parse(dm, dmi_data[16]);
loongson_sysconf.cores_per_package = *(dmi_data + SMBIOS_CORE_PACKAGE_OFFSET);
loongson_sysconf.cores_per_package = *(dmi_data + SMBIOS_THREAD_PACKAGE_OFFSET);
pr_info("CpuClock = %llu\n", cpu_clock_freq);
}


@ -555,6 +555,9 @@ asmlinkage void noinstr do_ale(struct pt_regs *regs)
#else
unsigned int *pc;
if (regs->csr_prmd & CSR_PRMD_PIE)
local_irq_enable();
perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, regs->csr_badvaddr);
/*
@ -579,6 +582,8 @@ sigbus:
die_if_kernel("Kernel ale access", regs);
force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr);
out:
if (regs->csr_prmd & CSR_PRMD_PIE)
local_irq_disable();
#endif
irqentry_exit(regs, state);
}


@ -34,7 +34,6 @@ static union {
struct loongarch_vdso_data vdata;
} loongarch_vdso_data __page_aligned_data;
static struct page *vdso_pages[] = { NULL };
struct vdso_data *vdso_data = generic_vdso_data.data;
struct vdso_pcpu_data *vdso_pdata = loongarch_vdso_data.vdata.pdata;
struct vdso_rng_data *vdso_rng_data = &loongarch_vdso_data.vdata.rng_data;
@ -85,10 +84,8 @@ static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
struct loongarch_vdso_info vdso_info = {
.vdso = vdso_start,
.size = PAGE_SIZE,
.code_mapping = {
.name = "[vdso]",
.pages = vdso_pages,
.mremap = vdso_mremap,
},
.data_mapping = {
@ -103,11 +100,14 @@ static int __init init_vdso(void)
unsigned long i, cpu, pfn;
BUG_ON(!PAGE_ALIGNED(vdso_info.vdso));
BUG_ON(!PAGE_ALIGNED(vdso_info.size));
for_each_possible_cpu(cpu)
vdso_pdata[cpu].node = cpu_to_node(cpu);
vdso_info.size = PAGE_ALIGN(vdso_end - vdso_start);
vdso_info.code_mapping.pages =
kcalloc(vdso_info.size / PAGE_SIZE, sizeof(struct page *), GFP_KERNEL);
pfn = __phys_to_pfn(__pa_symbol(vdso_info.vdso));
for (i = 0; i < vdso_info.size / PAGE_SIZE; i++)
vdso_info.code_mapping.pages[i] = pfn_to_page(pfn + i);


@ -161,10 +161,11 @@ static void _kvm_save_timer(struct kvm_vcpu *vcpu)
if (kvm_vcpu_is_blocking(vcpu)) {
/*
* HRTIMER_MODE_PINNED is suggested since vcpu may run in
* the same physical cpu in next time
* HRTIMER_MODE_PINNED_HARD is suggested since the vcpu may run on
* the same physical cpu next time, and the timer should run
* in hardirq context even in the PREEMPT_RT case.
*/
hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED);
hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED_HARD);
}
}


@ -1457,7 +1457,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
vcpu->arch.vpid = 0;
vcpu->arch.flush_gpa = INVALID_GPA;
hrtimer_init(&vcpu->arch.swtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
hrtimer_init(&vcpu->arch.swtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED_HARD);
vcpu->arch.swtimer.function = kvm_swtimer_wakeup;
vcpu->arch.handle_exit = kvm_handle_exit;


@ -201,7 +201,9 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
if (!pte)
panic("%s: Failed to allocate memory\n", __func__);
pmd_populate_kernel(&init_mm, pmd, pte);
kernel_pte_init(pte);
}
return pte_offset_kernel(pmd, addr);


@ -116,6 +116,26 @@ void pud_init(void *addr)
EXPORT_SYMBOL_GPL(pud_init);
#endif
void kernel_pte_init(void *addr)
{
unsigned long *p, *end;
p = (unsigned long *)addr;
end = p + PTRS_PER_PTE;
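/*
 * Initialise every entry to _PAGE_GLOBAL, eight entries per iteration
 * (this assumes PTRS_PER_PTE is a multiple of eight), so that kernel
 * PTEs carry the global bit from the start instead of relying on the
 * buddy fixup that set_pte() used to apply.
 */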
do {
p[0] = _PAGE_GLOBAL;
p[1] = _PAGE_GLOBAL;
p[2] = _PAGE_GLOBAL;
p[3] = _PAGE_GLOBAL;
p[4] = _PAGE_GLOBAL;
p += 8;
p[-3] = _PAGE_GLOBAL;
p[-2] = _PAGE_GLOBAL;
p[-1] = _PAGE_GLOBAL;
} while (p != end);
}
pmd_t mk_pmd(struct page *page, pgprot_t prot)
{
pmd_t pmd;


@ -282,6 +282,7 @@ int __init opal_event_init(void)
name, NULL);
if (rc) {
pr_warn("Error %d requesting OPAL irq %d\n", rc, (int)r->start);
kfree(name);
continue;
}
}


@ -55,7 +55,7 @@ struct imsic {
/* IMSIC SW-file */
struct imsic_mrif *swfile;
phys_addr_t swfile_pa;
spinlock_t swfile_extirq_lock;
raw_spinlock_t swfile_extirq_lock;
};
#define imsic_vs_csr_read(__c) \
@ -622,7 +622,7 @@ static void imsic_swfile_extirq_update(struct kvm_vcpu *vcpu)
* interruptions between reading topei and updating pending status.
*/
spin_lock_irqsave(&imsic->swfile_extirq_lock, flags);
raw_spin_lock_irqsave(&imsic->swfile_extirq_lock, flags);
if (imsic_mrif_atomic_read(mrif, &mrif->eidelivery) &&
imsic_mrif_topei(mrif, imsic->nr_eix, imsic->nr_msis))
@ -630,7 +630,7 @@ static void imsic_swfile_extirq_update(struct kvm_vcpu *vcpu)
else
kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_EXT);
spin_unlock_irqrestore(&imsic->swfile_extirq_lock, flags);
raw_spin_unlock_irqrestore(&imsic->swfile_extirq_lock, flags);
}
static void imsic_swfile_read(struct kvm_vcpu *vcpu, bool clear,
@ -1051,7 +1051,7 @@ int kvm_riscv_vcpu_aia_imsic_init(struct kvm_vcpu *vcpu)
}
imsic->swfile = page_to_virt(swfile_page);
imsic->swfile_pa = page_to_phys(swfile_page);
spin_lock_init(&imsic->swfile_extirq_lock);
raw_spin_lock_init(&imsic->swfile_extirq_lock);
/* Setup IO device */
kvm_iodevice_init(&imsic->iodev, &imsic_iodoev_ops);


@ -18,6 +18,7 @@
#define RV_MAX_REG_ARGS 8
#define RV_FENTRY_NINSNS 2
#define RV_FENTRY_NBYTES (RV_FENTRY_NINSNS * 4)
#define RV_KCFI_NINSNS (IS_ENABLED(CONFIG_CFI_CLANG) ? 1 : 0)
/* imm that allows emit_imm to emit max count insns */
#define RV_MAX_COUNT_IMM 0x7FFF7FF7FF7FF7FF
@ -271,7 +272,8 @@ static void __build_epilogue(bool is_tail_call, struct rv_jit_context *ctx)
if (!is_tail_call)
emit_addiw(RV_REG_A0, RV_REG_A5, 0, ctx);
emit_jalr(RV_REG_ZERO, is_tail_call ? RV_REG_T3 : RV_REG_RA,
is_tail_call ? (RV_FENTRY_NINSNS + 1) * 4 : 0, /* skip reserved nops and TCC init */
/* kcfi, fentry and TCC init insns will be skipped on tailcall */
is_tail_call ? (RV_KCFI_NINSNS + RV_FENTRY_NINSNS + 1) * 4 : 0,
ctx);
}
@ -548,8 +550,8 @@ static void emit_atomic(u8 rd, u8 rs, s16 off, s32 imm, bool is64,
rv_lr_w(r0, 0, rd, 0, 0), ctx);
jmp_offset = ninsns_rvoff(8);
emit(rv_bne(RV_REG_T2, r0, jmp_offset >> 1), ctx);
emit(is64 ? rv_sc_d(RV_REG_T3, rs, rd, 0, 0) :
rv_sc_w(RV_REG_T3, rs, rd, 0, 0), ctx);
emit(is64 ? rv_sc_d(RV_REG_T3, rs, rd, 0, 1) :
rv_sc_w(RV_REG_T3, rs, rd, 0, 1), ctx);
jmp_offset = ninsns_rvoff(-6);
emit(rv_bne(RV_REG_T3, 0, jmp_offset >> 1), ctx);
emit(rv_fence(0x3, 0x3), ctx);


@ -50,7 +50,6 @@ CONFIG_NUMA=y
CONFIG_HZ_100=y
CONFIG_CERT_STORE=y
CONFIG_EXPOLINE=y
# CONFIG_EXPOLINE_EXTERN is not set
CONFIG_EXPOLINE_AUTO=y
CONFIG_CHSC_SCH=y
CONFIG_VFIO_CCW=m
@ -95,6 +94,7 @@ CONFIG_BINFMT_MISC=m
CONFIG_ZSWAP=y
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
CONFIG_ZSMALLOC_STAT=y
CONFIG_SLAB_BUCKETS=y
CONFIG_SLUB_STATS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_MEMORY_HOTPLUG=y
@ -426,6 +426,13 @@ CONFIG_DEVTMPFS_SAFE=y
# CONFIG_FW_LOADER is not set
CONFIG_CONNECTOR=y
CONFIG_ZRAM=y
CONFIG_ZRAM_BACKEND_LZ4=y
CONFIG_ZRAM_BACKEND_LZ4HC=y
CONFIG_ZRAM_BACKEND_ZSTD=y
CONFIG_ZRAM_BACKEND_DEFLATE=y
CONFIG_ZRAM_BACKEND_842=y
CONFIG_ZRAM_BACKEND_LZO=y
CONFIG_ZRAM_DEF_COMP_DEFLATE=y
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
@ -486,6 +493,7 @@ CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=m
CONFIG_DM_VERITY=m
CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y
CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG_PLATFORM_KEYRING=y
CONFIG_DM_SWITCH=m
CONFIG_DM_INTEGRITY=m
CONFIG_DM_VDO=m
@ -535,6 +543,7 @@ CONFIG_NLMON=m
CONFIG_MLX4_EN=m
CONFIG_MLX5_CORE=m
CONFIG_MLX5_CORE_EN=y
# CONFIG_NET_VENDOR_META is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_MICROCHIP is not set
# CONFIG_NET_VENDOR_MICROSEMI is not set
@ -695,6 +704,7 @@ CONFIG_NFSD=m
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_NFSD_V4_SECURITY_LABEL=y
# CONFIG_NFSD_LEGACY_CLIENT_TRACKING is not set
CONFIG_CIFS=m
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
@ -740,7 +750,6 @@ CONFIG_CRYPTO_DH=m
CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_ECDSA=m
CONFIG_CRYPTO_ECRDSA=m
CONFIG_CRYPTO_SM2=m
CONFIG_CRYPTO_CURVE25519=m
CONFIG_CRYPTO_AES_TI=m
CONFIG_CRYPTO_ANUBIS=m


@ -48,7 +48,6 @@ CONFIG_NUMA=y
CONFIG_HZ_100=y
CONFIG_CERT_STORE=y
CONFIG_EXPOLINE=y
# CONFIG_EXPOLINE_EXTERN is not set
CONFIG_EXPOLINE_AUTO=y
CONFIG_CHSC_SCH=y
CONFIG_VFIO_CCW=m
@ -89,6 +88,7 @@ CONFIG_BINFMT_MISC=m
CONFIG_ZSWAP=y
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
CONFIG_ZSMALLOC_STAT=y
CONFIG_SLAB_BUCKETS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTREMOVE=y
@ -416,6 +416,13 @@ CONFIG_DEVTMPFS_SAFE=y
# CONFIG_FW_LOADER is not set
CONFIG_CONNECTOR=y
CONFIG_ZRAM=y
CONFIG_ZRAM_BACKEND_LZ4=y
CONFIG_ZRAM_BACKEND_LZ4HC=y
CONFIG_ZRAM_BACKEND_ZSTD=y
CONFIG_ZRAM_BACKEND_DEFLATE=y
CONFIG_ZRAM_BACKEND_842=y
CONFIG_ZRAM_BACKEND_LZO=y
CONFIG_ZRAM_DEF_COMP_DEFLATE=y
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
@ -476,6 +483,7 @@ CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=m
CONFIG_DM_VERITY=m
CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y
CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG_PLATFORM_KEYRING=y
CONFIG_DM_SWITCH=m
CONFIG_DM_INTEGRITY=m
CONFIG_DM_VDO=m
@ -525,6 +533,7 @@ CONFIG_NLMON=m
CONFIG_MLX4_EN=m
CONFIG_MLX5_CORE=m
CONFIG_MLX5_CORE_EN=y
# CONFIG_NET_VENDOR_META is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_MICROCHIP is not set
# CONFIG_NET_VENDOR_MICROSEMI is not set
@ -682,6 +691,7 @@ CONFIG_NFSD=m
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_NFSD_V4_SECURITY_LABEL=y
# CONFIG_NFSD_LEGACY_CLIENT_TRACKING is not set
CONFIG_CIFS=m
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
@ -726,7 +736,6 @@ CONFIG_CRYPTO_DH=m
CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_ECDSA=m
CONFIG_CRYPTO_ECRDSA=m
CONFIG_CRYPTO_SM2=m
CONFIG_CRYPTO_CURVE25519=m
CONFIG_CRYPTO_AES_TI=m
CONFIG_CRYPTO_ANUBIS=m
@ -767,6 +776,7 @@ CONFIG_CRYPTO_LZ4=m
CONFIG_CRYPTO_LZ4HC=m
CONFIG_CRYPTO_ZSTD=m
CONFIG_CRYPTO_ANSI_CPRNG=m
CONFIG_CRYPTO_JITTERENTROPY_OSR=1
CONFIG_CRYPTO_USER_API_HASH=m
CONFIG_CRYPTO_USER_API_SKCIPHER=m
CONFIG_CRYPTO_USER_API_RNG=m


@ -49,6 +49,7 @@ CONFIG_ZFCP=y
# CONFIG_HVC_IUCV is not set
# CONFIG_HW_RANDOM_S390 is not set
# CONFIG_HMC_DRV is not set
# CONFIG_S390_UV_UAPI is not set
# CONFIG_S390_TAPE is not set
# CONFIG_VMCP is not set
# CONFIG_MONWRITER is not set


@ -49,6 +49,7 @@ struct perf_sf_sde_regs {
};
#define perf_arch_fetch_caller_regs(regs, __ip) do { \
(regs)->psw.mask = 0; \
(regs)->psw.addr = (__ip); \
(regs)->gprs[15] = (unsigned long)__builtin_frame_address(0) - \
offsetof(struct stack_frame, back_chain); \


@ -77,7 +77,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu)
vcpu->stat.instruction_diagnose_258++;
if (vcpu->run->s.regs.gprs[rx] & 7)
return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
rc = read_guest(vcpu, vcpu->run->s.regs.gprs[rx], rx, &parm, sizeof(parm));
rc = read_guest_real(vcpu, vcpu->run->s.regs.gprs[rx], &parm, sizeof(parm));
if (rc)
return kvm_s390_inject_prog_cond(vcpu, rc);
if (parm.parm_version != 2 || parm.parm_len < 5 || parm.code != 0x258)


@ -828,6 +828,8 @@ static int access_guest_page(struct kvm *kvm, enum gacc_mode mode, gpa_t gpa,
const gfn_t gfn = gpa_to_gfn(gpa);
int rc;
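/*
 * Addresses that are not backed by a memslot are reported as an
 * addressing exception so that callers can turn it into a program
 * interruption for the guest.
 */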
if (!gfn_to_memslot(kvm, gfn))
return PGM_ADDRESSING;
if (mode == GACC_STORE)
rc = kvm_write_guest_page(kvm, gfn, data, offset, len);
else
@ -985,6 +987,8 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
gra += fragment_len;
data += fragment_len;
}
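/*
 * A positive return value is a program interruption code (e.g.
 * PGM_ADDRESSING); store it so that kvm_s390_inject_prog_cond() can
 * inject it into the guest where applicable.
 */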
if (rc > 0)
vcpu->arch.pgm.code = rc;
return rc;
}


@ -405,11 +405,12 @@ int read_guest_abs(struct kvm_vcpu *vcpu, unsigned long gpa, void *data,
* @len: number of bytes to copy
*
* Copy @len bytes from @data (kernel space) to @gra (guest real address).
* It is up to the caller to ensure that the entire guest memory range is
* valid memory before calling this function.
* Guest low address and key protection are not checked.
*
* Returns zero on success or -EFAULT on error.
* Returns zero on success, -EFAULT when copying from @data failed, or
* PGM_ADDRESSING in case @gra is outside a memslot. In this case, pgm check info
* is also stored to allow injecting into the guest (if applicable) using
* kvm_s390_inject_prog_cond().
*
* If an error occurs data may have been copied partially to guest memory.
*/
@ -428,11 +429,12 @@ int write_guest_real(struct kvm_vcpu *vcpu, unsigned long gra, void *data,
* @len: number of bytes to copy
*
* Copy @len bytes from @gra (guest real address) to @data (kernel space).
* It is up to the caller to ensure that the entire guest memory range is
* valid memory before calling this function.
* Guest key protection is not checked.
*
* Returns zero on success or -EFAULT on error.
* Returns zero on success, -EFAULT when copying to @data failed, or
* PGM_ADDRESSING in case @gra is outside a memslot. In this case, pgm check info
* is also stored to allow injecting into the guest (if applicable) using
* kvm_s390_inject_prog_cond().
*
* If an error occurs data may have been copied partially to kernel space.
*/


@ -280,18 +280,19 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf)
goto no_pdev;
switch (ccdf->pec) {
case 0x003a: /* Service Action or Error Recovery Successful */
case 0x002a: /* Error event concerns FMB */
case 0x002b:
case 0x002c:
break;
case 0x0040: /* Service Action or Error Recovery Failed */
case 0x003b:
zpci_event_io_failure(pdev, pci_channel_io_perm_failure);
break;
default: /* PCI function left in the error state attempt to recover */
ers_res = zpci_event_attempt_error_recovery(pdev);
if (ers_res != PCI_ERS_RESULT_RECOVERED)
zpci_event_io_failure(pdev, pci_channel_io_perm_failure);
break;
default:
/*
* Mark as frozen, not permanently failed, because the device
* could be subsequently recovered by the platform.
*/
zpci_event_io_failure(pdev, pci_channel_io_frozen);
break;
}
pci_dev_put(pdev);
no_pdev:


@ -9,6 +9,8 @@
#include <asm/unwind_hints.h>
#include <asm/segment.h>
#include <asm/cache.h>
#include <asm/cpufeatures.h>
#include <asm/nospec-branch.h>
#include "calling.h"
@ -19,6 +21,9 @@ SYM_FUNC_START(entry_ibpb)
movl $PRED_CMD_IBPB, %eax
xorl %edx, %edx
wrmsr
/* Make sure IBPB clears return stack predictions too. */
FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_BUG_IBPB_NO_RET
RET
SYM_FUNC_END(entry_ibpb)
/* For KVM */


@ -871,6 +871,8 @@ SYM_FUNC_START(entry_SYSENTER_32)
/* Now ready to switch the cr3 */
SWITCH_TO_USER_CR3 scratch_reg=%eax
/* Clobbers ZF */
CLEAR_CPU_BUFFERS
/*
* Restore all flags except IF. (We restore IF separately because
@ -881,7 +883,6 @@ SYM_FUNC_START(entry_SYSENTER_32)
BUG_IF_WRONG_CR3 no_user_check=1
popfl
popl %eax
CLEAR_CPU_BUFFERS
/*
* Return back to the vDSO, which will pop ecx and edx.
@ -1144,7 +1145,6 @@ SYM_CODE_START(asm_exc_nmi)
/* Not on SYSENTER stack. */
call exc_nmi
CLEAR_CPU_BUFFERS
jmp .Lnmi_return
.Lnmi_from_sysenter_stack:
@ -1165,6 +1165,7 @@ SYM_CODE_START(asm_exc_nmi)
CHECK_AND_APPLY_ESPFIX
RESTORE_ALL_NMI cr3_reg=%edi pop=4
CLEAR_CPU_BUFFERS
jmp .Lirq_return
#ifdef CONFIG_X86_ESPFIX32
@ -1206,6 +1207,7 @@ SYM_CODE_START(asm_exc_nmi)
* 1 - orig_ax
*/
lss (1+5+6)*4(%esp), %esp # back to espfix stack
CLEAR_CPU_BUFFERS
jmp .Lirq_return
#endif
SYM_CODE_END(asm_exc_nmi)


@ -215,7 +215,7 @@
#define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* Disable Speculative Store Bypass. */
#define X86_FEATURE_LS_CFG_SSBD ( 7*32+24) /* AMD SSBD implementation via LS_CFG MSR */
#define X86_FEATURE_IBRS ( 7*32+25) /* "ibrs" Indirect Branch Restricted Speculation */
#define X86_FEATURE_IBPB ( 7*32+26) /* "ibpb" Indirect Branch Prediction Barrier */
#define X86_FEATURE_IBPB ( 7*32+26) /* "ibpb" Indirect Branch Prediction Barrier without a guaranteed RSB flush */
#define X86_FEATURE_STIBP ( 7*32+27) /* "stibp" Single Thread Indirect Branch Predictors */
#define X86_FEATURE_ZEN ( 7*32+28) /* Generic flag for all Zen and newer */
#define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* L1TF workaround PTE inversion */
@ -348,6 +348,7 @@
#define X86_FEATURE_CPPC (13*32+27) /* "cppc" Collaborative Processor Performance Control */
#define X86_FEATURE_AMD_PSFD (13*32+28) /* Predictive Store Forwarding Disable */
#define X86_FEATURE_BTC_NO (13*32+29) /* Not vulnerable to Branch Type Confusion */
#define X86_FEATURE_AMD_IBPB_RET (13*32+30) /* IBPB clears return address predictor */
#define X86_FEATURE_BRS (13*32+31) /* "brs" Branch Sampling available */
/* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
@ -523,4 +524,5 @@
#define X86_BUG_DIV0 X86_BUG(1*32 + 1) /* "div0" AMD DIV0 speculation bug */
#define X86_BUG_RFDS X86_BUG(1*32 + 2) /* "rfds" CPU is vulnerable to Register File Data Sampling */
#define X86_BUG_BHI X86_BUG(1*32 + 3) /* "bhi" CPU is affected by Branch History Injection */
#define X86_BUG_IBPB_NO_RET X86_BUG(1*32 + 4) /* "ibpb_no_ret" IBPB omits return target predictions */
#endif /* _ASM_X86_CPUFEATURES_H */


@ -323,7 +323,16 @@
* Note: Only the memory operand variant of VERW clears the CPU buffers.
*/
.macro CLEAR_CPU_BUFFERS
ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
#ifdef CONFIG_X86_64
ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
#else
/*
* In 32bit mode, the memory operand must be a %cs reference. The data
* segments may not be usable (vm86 mode), and the stack segment may not
* be flat (ESPFIX32).
*/
ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
#endif
.endm
#ifdef CONFIG_X86_64


@ -44,6 +44,7 @@
#define PCI_DEVICE_ID_AMD_19H_M70H_DF_F4 0x14f4
#define PCI_DEVICE_ID_AMD_19H_M78H_DF_F4 0x12fc
#define PCI_DEVICE_ID_AMD_1AH_M00H_DF_F4 0x12c4
#define PCI_DEVICE_ID_AMD_1AH_M20H_DF_F4 0x16fc
#define PCI_DEVICE_ID_AMD_1AH_M60H_DF_F4 0x124c
#define PCI_DEVICE_ID_AMD_1AH_M70H_DF_F4 0x12bc
#define PCI_DEVICE_ID_AMD_MI200_DF_F4 0x14d4
@ -127,6 +128,7 @@ static const struct pci_device_id amd_nb_link_ids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M78H_DF_F4) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F4) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M00H_DF_F4) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M20H_DF_F4) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M60H_DF_F4) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M70H_DF_F4) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_MI200_DF_F4) },


@ -440,7 +440,19 @@ static int lapic_timer_shutdown(struct clock_event_device *evt)
v = apic_read(APIC_LVTT);
v |= (APIC_LVT_MASKED | LOCAL_TIMER_VECTOR);
apic_write(APIC_LVTT, v);
apic_write(APIC_TMICT, 0);
/*
* Setting APIC_LVT_MASKED (above) should be enough to tell
* the hardware that this timer will never fire. But AMD
* erratum 411 and some Intel CPU behavior circa 2024 say
* otherwise. Time for belt and suspenders programming: mask
* the timer _and_ zero the counter registers:
*/
if (v & APIC_LVT_TIMER_TSCDEADLINE)
wrmsrl(MSR_IA32_TSC_DEADLINE, 0);
else
apic_write(APIC_TMICT, 0);
return 0;
}


@ -1202,5 +1202,6 @@ void amd_check_microcode(void)
if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
return;
on_each_cpu(zenbleed_check_cpu, NULL, 1);
if (cpu_feature_enabled(X86_FEATURE_ZEN2))
on_each_cpu(zenbleed_check_cpu, NULL, 1);
}


@ -1115,8 +1115,25 @@ do_cmd_auto:
case RETBLEED_MITIGATION_IBPB:
setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
/*
* IBPB on entry already obviates the need for
* software-based untraining so clear those in case some
* other mitigation like SRSO has selected them.
*/
setup_clear_cpu_cap(X86_FEATURE_UNRET);
setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
mitigate_smt = true;
/*
* There is no need for RSB filling: entry_ibpb() ensures
* all predictions, including the RSB, are invalidated,
* regardless of IBPB implementation.
*/
setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
break;
case RETBLEED_MITIGATION_STUFF:
@ -2627,6 +2644,14 @@ static void __init srso_select_mitigation(void)
if (has_microcode) {
setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
srso_mitigation = SRSO_MITIGATION_IBPB;
/*
* IBPB on entry already obviates the need for
* software-based untraining so clear those in case some
* other mitigation like Retbleed has selected them.
*/
setup_clear_cpu_cap(X86_FEATURE_UNRET);
setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
}
} else {
pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
@ -2638,6 +2663,13 @@ static void __init srso_select_mitigation(void)
if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
/*
* There is no need for RSB filling: entry_ibpb() ensures
* all predictions, including the RSB, are invalidated,
* regardless of IBPB implementation.
*/
setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
}
} else {
pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");


@ -1443,6 +1443,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
boot_cpu_has(X86_FEATURE_HYPERVISOR)))
setup_force_cpu_bug(X86_BUG_BHI);
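/*
 * AMD IBPB implementations without the IBPB_RET feature do not
 * invalidate return target predictions, so flag the CPU accordingly.
 */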
if (cpu_has(c, X86_FEATURE_AMD_IBPB) && !cpu_has(c, X86_FEATURE_AMD_IBPB_RET))
setup_force_cpu_bug(X86_BUG_IBPB_NO_RET);
if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
return;


@ -207,7 +207,7 @@ static inline bool rdt_get_mb_table(struct rdt_resource *r)
return false;
}
static bool __get_mem_config_intel(struct rdt_resource *r)
static __init bool __get_mem_config_intel(struct rdt_resource *r)
{
struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
union cpuid_0x10_3_eax eax;
@ -241,7 +241,7 @@ static bool __get_mem_config_intel(struct rdt_resource *r)
return true;
}
static bool __rdt_get_mem_config_amd(struct rdt_resource *r)
static __init bool __rdt_get_mem_config_amd(struct rdt_resource *r)
{
struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
u32 eax, ebx, ecx, edx, subleaf;


@ -29,10 +29,10 @@
* hardware. The allocated bandwidth percentage is rounded to the next
* control step available on the hardware.
*/
static bool bw_validate(char *buf, unsigned long *data, struct rdt_resource *r)
static bool bw_validate(char *buf, u32 *data, struct rdt_resource *r)
{
unsigned long bw;
int ret;
u32 bw;
/*
* Only linear delay values are supported for current Intel SKUs.
@ -42,16 +42,21 @@ static bool bw_validate(char *buf, unsigned long *data, struct rdt_resource *r)
return false;
}
ret = kstrtoul(buf, 10, &bw);
ret = kstrtou32(buf, 10, &bw);
if (ret) {
rdt_last_cmd_printf("Non-decimal digit in MB value %s\n", buf);
rdt_last_cmd_printf("Invalid MB value %s\n", buf);
return false;
}
if ((bw < r->membw.min_bw || bw > r->default_ctrl) &&
!is_mba_sc(r)) {
rdt_last_cmd_printf("MB value %ld out of range [%d,%d]\n", bw,
r->membw.min_bw, r->default_ctrl);
/* Nothing else to do if software controller is enabled. */
if (is_mba_sc(r)) {
*data = bw;
return true;
}
if (bw < r->membw.min_bw || bw > r->default_ctrl) {
rdt_last_cmd_printf("MB value %u out of range [%d,%d]\n",
bw, r->membw.min_bw, r->default_ctrl);
return false;
}
@ -65,7 +70,7 @@ int parse_bw(struct rdt_parse_data *data, struct resctrl_schema *s,
struct resctrl_staged_config *cfg;
u32 closid = data->rdtgrp->closid;
struct rdt_resource *r = s->res;
unsigned long bw_val;
u32 bw_val;
cfg = &d->staged_config[s->conf_type];
if (cfg->have_new_ctrl) {


@ -37,6 +37,7 @@
#include <asm/apic.h>
#include <asm/apicdef.h>
#include <asm/hypervisor.h>
#include <asm/mtrr.h>
#include <asm/tlb.h>
#include <asm/cpuidle_haltpoll.h>
#include <asm/ptrace.h>
@ -980,6 +981,9 @@ static void __init kvm_init_platform(void)
}
kvmclock_init();
x86_platform.apic_post_init = kvm_apic_init;
/* Set WB as the default cache mode for SEV-SNP and TDX */
mtrr_overwrite_state(NULL, 0, MTRR_TYPE_WRBACK);
}
#if defined(CONFIG_AMD_MEM_ENCRYPT)


@ -1556,6 +1556,17 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
{
bool flush = false;
/*
* To prevent races with vCPUs faulting in a gfn using stale data,
* zapping a gfn range must be protected by mmu_invalidate_in_progress
* (and mmu_invalidate_seq). The only exception is memslot deletion;
* in that case, SRCU synchronization ensures that SPTEs are zapped
* after all vCPUs have unlocked SRCU, guaranteeing that vCPUs see the
* invalid slot.
*/
lockdep_assert_once(kvm->mmu_invalidate_in_progress ||
lockdep_is_held(&kvm->slots_lock));
if (kvm_memslots_have_rmaps(kvm))
flush = __kvm_rmap_zap_gfn_range(kvm, range->slot,
range->start, range->end,
@ -1884,14 +1895,10 @@ static bool sp_has_gptes(struct kvm_mmu_page *sp)
if (is_obsolete_sp((_kvm), (_sp))) { \
} else
#define for_each_gfn_valid_sp(_kvm, _sp, _gfn) \
#define for_each_gfn_valid_sp_with_gptes(_kvm, _sp, _gfn) \
for_each_valid_sp(_kvm, _sp, \
&(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)]) \
if ((_sp)->gfn != (_gfn)) {} else
#define for_each_gfn_valid_sp_with_gptes(_kvm, _sp, _gfn) \
for_each_gfn_valid_sp(_kvm, _sp, _gfn) \
if (!sp_has_gptes(_sp)) {} else
if ((_sp)->gfn != (_gfn) || !sp_has_gptes(_sp)) {} else
static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
{
@ -7063,15 +7070,15 @@ static void kvm_mmu_zap_memslot_pages_and_flush(struct kvm *kvm,
/*
* Since accounting information is stored in struct kvm_arch_memory_slot,
* shadow pages deletion (e.g. unaccount_shadowed()) requires that all
* gfns with a shadow page have a corresponding memslot. Do so before
* the memslot goes away.
* all MMU pages that are shadowing guest PTEs must be zapped before the
* memslot is deleted, as freeing such pages after the memslot is freed
* will result in use-after-free, e.g. in unaccount_shadowed().
*/
for (i = 0; i < slot->npages; i++) {
struct kvm_mmu_page *sp;
gfn_t gfn = slot->base_gfn + i;
for_each_gfn_valid_sp(kvm, sp, gfn)
for_each_gfn_valid_sp_with_gptes(kvm, sp, gfn)
kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {


@ -63,8 +63,12 @@ static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index)
u64 pdpte;
int ret;
/*
* Note, nCR3 is "assumed" to be 32-byte aligned, i.e. the CPU ignores
* nCR3[4:0] when loading PDPTEs from memory.
*/
ret = kvm_vcpu_read_guest_page(vcpu, gpa_to_gfn(cr3), &pdpte,
offset_in_page(cr3) + index * 8, 8);
(cr3 & GENMASK(11, 5)) + index * 8, 8);
if (ret)
return 0;
return pdpte;


@ -4888,9 +4888,6 @@ void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
vmx->hv_deadline_tsc = -1;
kvm_set_cr8(vcpu, 0);
vmx_segment_cache_clear(vmx);
kvm_register_mark_available(vcpu, VCPU_EXREG_SEGMENTS);
seg_setup(VCPU_SREG_CS);
vmcs_write16(GUEST_CS_SELECTOR, 0xf000);
vmcs_writel(GUEST_CS_BASE, 0xffff0000ul);
@ -4917,6 +4914,9 @@ void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
vmcs_writel(GUEST_IDTR_BASE, 0);
vmcs_write32(GUEST_IDTR_LIMIT, 0xffff);
vmx_segment_cache_clear(vmx);
kvm_register_mark_available(vcpu, VCPU_EXREG_SEGMENTS);
vmcs_write32(GUEST_ACTIVITY_STATE, GUEST_ACTIVITY_ACTIVE);
vmcs_write32(GUEST_INTERRUPTIBILITY_INFO, 0);
vmcs_writel(GUEST_PENDING_DBG_EXCEPTIONS, 0);


@ -4310,6 +4310,12 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
/* mark the queue as mq asap */
q->mq_ops = set->ops;
/*
* ->tag_set has to be set up before initializing hctx, since the
* cpuhp handler needs it for checking the queue mapping
*/
q->tag_set = set;
if (blk_mq_alloc_ctxs(q))
goto err_exit;
@ -4328,8 +4334,6 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
INIT_WORK(&q->timeout_work, blk_mq_timeout_work);
blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ);
q->tag_set = set;
q->queue_flags |= QUEUE_FLAG_MQ_DEFAULT;
INIT_DELAYED_WORK(&q->requeue_work, blk_mq_requeue_work);


@ -219,8 +219,8 @@ static int rq_qos_wake_function(struct wait_queue_entry *curr,
data->got_token = true;
smp_wmb();
list_del_init(&curr->entry);
wake_up_process(data->task);
list_del_init_careful(&curr->entry);
return 1;
}


@ -106,8 +106,7 @@ static struct elevator_type *__elevator_find(const char *name)
return NULL;
}
static struct elevator_type *elevator_find_get(struct request_queue *q,
const char *name)
static struct elevator_type *elevator_find_get(const char *name)
{
struct elevator_type *e;
@ -551,7 +550,7 @@ EXPORT_SYMBOL_GPL(elv_unregister);
static inline bool elv_support_iosched(struct request_queue *q)
{
if (!queue_is_mq(q) ||
(q->tag_set && (q->tag_set->flags & BLK_MQ_F_NO_SCHED)))
(q->tag_set->flags & BLK_MQ_F_NO_SCHED))
return false;
return true;
}
@ -562,14 +561,14 @@ static inline bool elv_support_iosched(struct request_queue *q)
*/
static struct elevator_type *elevator_get_default(struct request_queue *q)
{
if (q->tag_set && q->tag_set->flags & BLK_MQ_F_NO_SCHED_BY_DEFAULT)
if (q->tag_set->flags & BLK_MQ_F_NO_SCHED_BY_DEFAULT)
return NULL;
if (q->nr_hw_queues != 1 &&
!blk_mq_is_shared_tags(q->tag_set->flags))
return NULL;
return elevator_find_get(q, "mq-deadline");
return elevator_find_get("mq-deadline");
}
/*
@ -697,7 +696,7 @@ static int elevator_change(struct request_queue *q, const char *elevator_name)
if (q->elevator && elevator_match(q->elevator->type, elevator_name))
return 0;
e = elevator_find_get(q, elevator_name);
e = elevator_find_get(elevator_name);
if (!e)
return -EINVAL;
ret = elevator_switch(q, e);
@ -709,13 +708,21 @@ int elv_iosched_load_module(struct gendisk *disk, const char *buf,
size_t count)
{
char elevator_name[ELV_NAME_MAX];
struct elevator_type *found;
const char *name;
if (!elv_support_iosched(disk->queue))
return -EOPNOTSUPP;
strscpy(elevator_name, buf, sizeof(elevator_name));
name = strstrip(elevator_name);
request_module("%s-iosched", strstrip(elevator_name));
spin_lock(&elv_list_lock);
found = __elevator_find(name);
spin_unlock(&elv_list_lock);
if (!found)
request_module("%s-iosched", name);
return 0;
}


@ -373,7 +373,7 @@ found:
q->cra_flags |= CRYPTO_ALG_DEAD;
alg = test->adult;
if (list_empty(&alg->cra_list))
if (crypto_is_dead(alg))
goto complete;
if (err == -ECANCELED)


@ -1940,7 +1940,7 @@ static int __alg_test_hash(const struct hash_testvec *vecs,
atfm = crypto_alloc_ahash(driver, type, mask);
if (IS_ERR(atfm)) {
if (PTR_ERR(atfm) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: hash: failed to allocate transform for %s: %ld\n",
driver, PTR_ERR(atfm));
return PTR_ERR(atfm);
@ -2706,7 +2706,7 @@ static int alg_test_aead(const struct alg_test_desc *desc, const char *driver,
tfm = crypto_alloc_aead(driver, type, mask);
if (IS_ERR(tfm)) {
if (PTR_ERR(tfm) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: aead: failed to allocate transform for %s: %ld\n",
driver, PTR_ERR(tfm));
return PTR_ERR(tfm);
@ -3285,7 +3285,7 @@ static int alg_test_skcipher(const struct alg_test_desc *desc,
tfm = crypto_alloc_skcipher(driver, type, mask);
if (IS_ERR(tfm)) {
if (PTR_ERR(tfm) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: skcipher: failed to allocate transform for %s: %ld\n",
driver, PTR_ERR(tfm));
return PTR_ERR(tfm);
@ -3700,7 +3700,7 @@ static int alg_test_cipher(const struct alg_test_desc *desc,
tfm = crypto_alloc_cipher(driver, type, mask);
if (IS_ERR(tfm)) {
if (PTR_ERR(tfm) == -ENOENT)
return -ENOENT;
return 0;
printk(KERN_ERR "alg: cipher: Failed to load transform for "
"%s: %ld\n", driver, PTR_ERR(tfm));
return PTR_ERR(tfm);
@ -3726,7 +3726,7 @@ static int alg_test_comp(const struct alg_test_desc *desc, const char *driver,
acomp = crypto_alloc_acomp(driver, type, mask);
if (IS_ERR(acomp)) {
if (PTR_ERR(acomp) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: acomp: Failed to load transform for %s: %ld\n",
driver, PTR_ERR(acomp));
return PTR_ERR(acomp);
@ -3740,7 +3740,7 @@ static int alg_test_comp(const struct alg_test_desc *desc, const char *driver,
comp = crypto_alloc_comp(driver, type, mask);
if (IS_ERR(comp)) {
if (PTR_ERR(comp) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: comp: Failed to load transform for %s: %ld\n",
driver, PTR_ERR(comp));
return PTR_ERR(comp);
@ -3818,7 +3818,7 @@ static int alg_test_cprng(const struct alg_test_desc *desc, const char *driver,
rng = crypto_alloc_rng(driver, type, mask);
if (IS_ERR(rng)) {
if (PTR_ERR(rng) == -ENOENT)
return -ENOENT;
return 0;
printk(KERN_ERR "alg: cprng: Failed to load transform for %s: "
"%ld\n", driver, PTR_ERR(rng));
return PTR_ERR(rng);
@ -3846,12 +3846,11 @@ static int drbg_cavs_test(const struct drbg_testvec *test, int pr,
drng = crypto_alloc_rng(driver, type, mask);
if (IS_ERR(drng)) {
kfree_sensitive(buf);
if (PTR_ERR(drng) == -ENOENT)
goto out_no_rng;
return 0;
printk(KERN_ERR "alg: drbg: could not allocate DRNG handle for "
"%s\n", driver);
out_no_rng:
kfree_sensitive(buf);
return PTR_ERR(drng);
}
@ -4095,7 +4094,7 @@ static int alg_test_kpp(const struct alg_test_desc *desc, const char *driver,
tfm = crypto_alloc_kpp(driver, type, mask);
if (IS_ERR(tfm)) {
if (PTR_ERR(tfm) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: kpp: Failed to load tfm for %s: %ld\n",
driver, PTR_ERR(tfm));
return PTR_ERR(tfm);
@ -4325,7 +4324,7 @@ static int alg_test_akcipher(const struct alg_test_desc *desc,
tfm = crypto_alloc_akcipher(driver, type, mask);
if (IS_ERR(tfm)) {
if (PTR_ERR(tfm) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: akcipher: Failed to load tfm for %s: %ld\n",
driver, PTR_ERR(tfm));
return PTR_ERR(tfm);


@ -496,7 +496,7 @@ static int encode_addr_size_pairs(struct dma_xfer *xfer, struct wrapper_list *wr
nents = sgt->nents;
nents_dma = nents;
*size = QAIC_MANAGE_EXT_MSG_LENGTH - msg_hdr_len - sizeof(**out_trans);
for_each_sgtable_sg(sgt, sg, i) {
for_each_sgtable_dma_sg(sgt, sg, i) {
*size -= sizeof(*asp);
/* Save 1K for possible follow-up transactions. */
if (*size < SZ_1K) {


@ -184,7 +184,7 @@ static int clone_range_of_sgt_for_slice(struct qaic_device *qdev, struct sg_tabl
nents = 0;
size = size ? size : PAGE_SIZE;
for (sg = sgt_in->sgl; sg; sg = sg_next(sg)) {
for_each_sgtable_dma_sg(sgt_in, sg, j) {
len = sg_dma_len(sg);
if (!len)
@ -221,7 +221,7 @@ static int clone_range_of_sgt_for_slice(struct qaic_device *qdev, struct sg_tabl
/* copy relevant sg node and fix page and length */
sgn = sgf;
for_each_sgtable_sg(sgt, sg, j) {
for_each_sgtable_dma_sg(sgt, sg, j) {
memcpy(sg, sgn, sizeof(*sg));
if (sgn == sgf) {
sg_dma_address(sg) += offf;
@ -301,7 +301,7 @@ static int encode_reqs(struct qaic_device *qdev, struct bo_slice *slice,
* fence.
*/
dev_addr = req->dev_addr;
for_each_sgtable_sg(slice->sgt, sg, i) {
for_each_sgtable_dma_sg(slice->sgt, sg, i) {
slice->reqs[i].cmd = cmd;
slice->reqs[i].src_addr = cpu_to_le64(slice->dir == DMA_TO_DEVICE ?
sg_dma_address(sg) : dev_addr);


@ -1364,7 +1364,6 @@ extern struct bio_set drbd_io_bio_set;
extern struct mutex resources_mutex;
extern int conn_lowest_minor(struct drbd_connection *connection);
extern enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsigned int minor);
extern void drbd_destroy_device(struct kref *kref);
extern void drbd_delete_device(struct drbd_device *device);


@ -471,20 +471,6 @@ void _drbd_thread_stop(struct drbd_thread *thi, int restart, int wait)
wait_for_completion(&thi->stop);
}
int conn_lowest_minor(struct drbd_connection *connection)
{
struct drbd_peer_device *peer_device;
int vnr = 0, minor = -1;
rcu_read_lock();
peer_device = idr_get_next(&connection->peer_devices, &vnr);
if (peer_device)
minor = device_to_minor(peer_device->device);
rcu_read_unlock();
return minor;
}
#ifdef CONFIG_SMP
/*
* drbd_calc_cpu_mask() - Generate CPU masks, spread over all CPUs


@ -2380,10 +2380,19 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
* TODO: provide forward progress for RECOVERY handler, so that
* unprivileged device can benefit from it
*/
if (info.flags & UBLK_F_UNPRIVILEGED_DEV)
if (info.flags & UBLK_F_UNPRIVILEGED_DEV) {
info.flags &= ~(UBLK_F_USER_RECOVERY_REISSUE |
UBLK_F_USER_RECOVERY);
/*
* For USER_COPY, we depend on userspace to fill the request
* buffer by pwrite() to the ublk char device, which can't be
* used for an unprivileged device
*/
if (info.flags & UBLK_F_USER_COPY)
return -EINVAL;
}
/* the created device is always owned by current user */
ublk_store_owner_uid_gid(&info.owner_uid, &info.owner_gid);


@ -1345,10 +1345,15 @@ static int btusb_submit_intr_urb(struct hci_dev *hdev, gfp_t mem_flags)
if (!urb)
return -ENOMEM;
/* Use maximum HCI Event size so the USB stack handles
* ZLP/short-transfer automatically.
*/
size = HCI_MAX_EVENT_SIZE;
if (le16_to_cpu(data->udev->descriptor.idVendor) == 0x0a12 &&
le16_to_cpu(data->udev->descriptor.idProduct) == 0x0001)
/* Fake CSR devices don't seem to support short-transfer */
size = le16_to_cpu(data->intr_ep->wMaxPacketSize);
else
/* Use maximum HCI Event size so the USB stack handles
* ZLP/short-transfer automatically.
*/
size = HCI_MAX_EVENT_SIZE;
buf = kmalloc(size, mem_flags);
if (!buf) {
@ -4038,7 +4043,6 @@ static void btusb_disconnect(struct usb_interface *intf)
static int btusb_suspend(struct usb_interface *intf, pm_message_t message)
{
struct btusb_data *data = usb_get_intfdata(intf);
int err;
BT_DBG("intf %p", intf);
@ -4051,16 +4055,6 @@ static int btusb_suspend(struct usb_interface *intf, pm_message_t message)
if (data->suspend_count++)
return 0;
/* Notify Host stack to suspend; this has to be done before stopping
* the traffic since the hci_suspend_dev itself may generate some
* traffic.
*/
err = hci_suspend_dev(data->hdev);
if (err) {
data->suspend_count--;
return err;
}
spin_lock_irq(&data->txlock);
if (!(PMSG_IS_AUTO(message) && data->tx_in_flight)) {
set_bit(BTUSB_SUSPENDING, &data->flags);
@ -4068,7 +4062,6 @@ static int btusb_suspend(struct usb_interface *intf, pm_message_t message)
} else {
spin_unlock_irq(&data->txlock);
data->suspend_count--;
hci_resume_dev(data->hdev);
return -EBUSY;
}
@ -4189,8 +4182,6 @@ static int btusb_resume(struct usb_interface *intf)
spin_unlock_irq(&data->txlock);
schedule_work(&data->work);
hci_resume_dev(data->hdev);
return 0;
failed:


@ -2313,7 +2313,7 @@ static int cdrom_ioctl_media_changed(struct cdrom_device_info *cdi,
return -EINVAL;
/* Prevent arg from speculatively bypassing the length check */
barrier_nospec();
arg = array_index_nospec(arg, cdi->capacity);
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info)


@ -473,7 +473,7 @@ clk_multiple_parents_mux_test_init(struct kunit *test)
&clk_dummy_rate_ops,
0);
ctx->parents_ctx[0].rate = DUMMY_CLOCK_RATE_1;
ret = clk_hw_register(NULL, &ctx->parents_ctx[0].hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->parents_ctx[0].hw);
if (ret)
return ret;
@ -481,7 +481,7 @@ clk_multiple_parents_mux_test_init(struct kunit *test)
&clk_dummy_rate_ops,
0);
ctx->parents_ctx[1].rate = DUMMY_CLOCK_RATE_2;
ret = clk_hw_register(NULL, &ctx->parents_ctx[1].hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->parents_ctx[1].hw);
if (ret)
return ret;
@ -489,23 +489,13 @@ clk_multiple_parents_mux_test_init(struct kunit *test)
ctx->hw.init = CLK_HW_INIT_PARENTS("test-mux", parents,
&clk_multiple_parents_mux_ops,
CLK_SET_RATE_PARENT);
ret = clk_hw_register(NULL, &ctx->hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->hw);
if (ret)
return ret;
return 0;
}
static void
clk_multiple_parents_mux_test_exit(struct kunit *test)
{
struct clk_multiple_parent_ctx *ctx = test->priv;
clk_hw_unregister(&ctx->hw);
clk_hw_unregister(&ctx->parents_ctx[0].hw);
clk_hw_unregister(&ctx->parents_ctx[1].hw);
}
/*
* Test that for a clock with multiple parents, clk_get_parent()
* actually returns the current one.
@ -561,18 +551,18 @@ clk_test_multiple_parents_mux_set_range_set_parent_get_rate(struct kunit *test)
{
struct clk_multiple_parent_ctx *ctx = test->priv;
struct clk_hw *hw = &ctx->hw;
struct clk *clk = clk_hw_get_clk(hw, NULL);
struct clk *clk = clk_hw_get_clk_kunit(test, hw, NULL);
struct clk *parent1, *parent2;
unsigned long rate;
int ret;
kunit_skip(test, "This needs to be fixed in the core.");
parent1 = clk_hw_get_clk(&ctx->parents_ctx[0].hw, NULL);
parent1 = clk_hw_get_clk_kunit(test, &ctx->parents_ctx[0].hw, NULL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent1);
KUNIT_ASSERT_TRUE(test, clk_is_match(clk_get_parent(clk), parent1));
parent2 = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL);
parent2 = clk_hw_get_clk_kunit(test, &ctx->parents_ctx[1].hw, NULL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent2);
ret = clk_set_rate(parent1, DUMMY_CLOCK_RATE_1);
@ -593,10 +583,6 @@ clk_test_multiple_parents_mux_set_range_set_parent_get_rate(struct kunit *test)
KUNIT_ASSERT_GT(test, rate, 0);
KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1 - 1000);
KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_1 + 1000);
clk_put(parent2);
clk_put(parent1);
clk_put(clk);
}
static struct kunit_case clk_multiple_parents_mux_test_cases[] = {
@ -617,7 +603,6 @@ static struct kunit_suite
clk_multiple_parents_mux_test_suite = {
.name = "clk-multiple-parents-mux-test",
.init = clk_multiple_parents_mux_test_init,
.exit = clk_multiple_parents_mux_test_exit,
.test_cases = clk_multiple_parents_mux_test_cases,
};
@ -637,29 +622,20 @@ clk_orphan_transparent_multiple_parent_mux_test_init(struct kunit *test)
&clk_dummy_rate_ops,
0);
ctx->parents_ctx[1].rate = DUMMY_CLOCK_INIT_RATE;
ret = clk_hw_register(NULL, &ctx->parents_ctx[1].hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->parents_ctx[1].hw);
if (ret)
return ret;
ctx->hw.init = CLK_HW_INIT_PARENTS("test-orphan-mux", parents,
&clk_multiple_parents_mux_ops,
CLK_SET_RATE_PARENT);
ret = clk_hw_register(NULL, &ctx->hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->hw);
if (ret)
return ret;
return 0;
}
static void
clk_orphan_transparent_multiple_parent_mux_test_exit(struct kunit *test)
{
struct clk_multiple_parent_ctx *ctx = test->priv;
clk_hw_unregister(&ctx->hw);
clk_hw_unregister(&ctx->parents_ctx[1].hw);
}
/*
* Test that, for a mux whose current parent hasn't been registered yet and is
* thus orphan, clk_get_parent() will return NULL.
@ -912,7 +888,7 @@ clk_test_orphan_transparent_multiple_parent_mux_set_range_set_parent_get_rate(st
{
struct clk_multiple_parent_ctx *ctx = test->priv;
struct clk_hw *hw = &ctx->hw;
struct clk *clk = clk_hw_get_clk(hw, NULL);
struct clk *clk = clk_hw_get_clk_kunit(test, hw, NULL);
struct clk *parent;
unsigned long rate;
int ret;
@ -921,7 +897,7 @@ clk_test_orphan_transparent_multiple_parent_mux_set_range_set_parent_get_rate(st
clk_hw_set_rate_range(hw, DUMMY_CLOCK_RATE_1, DUMMY_CLOCK_RATE_2);
parent = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL);
parent = clk_hw_get_clk_kunit(test, &ctx->parents_ctx[1].hw, NULL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent);
ret = clk_set_parent(clk, parent);
@ -931,9 +907,6 @@ clk_test_orphan_transparent_multiple_parent_mux_set_range_set_parent_get_rate(st
KUNIT_ASSERT_GT(test, rate, 0);
KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1);
KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2);
clk_put(parent);
clk_put(clk);
}
static struct kunit_case clk_orphan_transparent_multiple_parent_mux_test_cases[] = {
@ -961,7 +934,6 @@ static struct kunit_case clk_orphan_transparent_multiple_parent_mux_test_cases[]
static struct kunit_suite clk_orphan_transparent_multiple_parent_mux_test_suite = {
.name = "clk-orphan-transparent-multiple-parent-mux-test",
.init = clk_orphan_transparent_multiple_parent_mux_test_init,
.exit = clk_orphan_transparent_multiple_parent_mux_test_exit,
.test_cases = clk_orphan_transparent_multiple_parent_mux_test_cases,
};
@ -986,7 +958,7 @@ static int clk_single_parent_mux_test_init(struct kunit *test)
&clk_dummy_rate_ops,
0);
ret = clk_hw_register(NULL, &ctx->parent_ctx.hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->parent_ctx.hw);
if (ret)
return ret;
@ -994,7 +966,7 @@ static int clk_single_parent_mux_test_init(struct kunit *test)
&clk_dummy_single_parent_ops,
CLK_SET_RATE_PARENT);
ret = clk_hw_register(NULL, &ctx->hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->hw);
if (ret)
return ret;
@ -1060,7 +1032,7 @@ clk_test_single_parent_mux_set_range_disjoint_child_last(struct kunit *test)
{
struct clk_single_parent_ctx *ctx = test->priv;
struct clk_hw *hw = &ctx->hw;
struct clk *clk = clk_hw_get_clk(hw, NULL);
struct clk *clk = clk_hw_get_clk_kunit(test, hw, NULL);
struct clk *parent;
int ret;
@ -1074,8 +1046,6 @@ clk_test_single_parent_mux_set_range_disjoint_child_last(struct kunit *test)
ret = clk_set_rate_range(clk, 3000, 4000);
KUNIT_EXPECT_LT(test, ret, 0);
clk_put(clk);
}
/*
@ -1092,7 +1062,7 @@ clk_test_single_parent_mux_set_range_disjoint_parent_last(struct kunit *test)
{
struct clk_single_parent_ctx *ctx = test->priv;
struct clk_hw *hw = &ctx->hw;
struct clk *clk = clk_hw_get_clk(hw, NULL);
struct clk *clk = clk_hw_get_clk_kunit(test, hw, NULL);
struct clk *parent;
int ret;
@ -1106,8 +1076,6 @@ clk_test_single_parent_mux_set_range_disjoint_parent_last(struct kunit *test)
ret = clk_set_rate_range(parent, 3000, 4000);
KUNIT_EXPECT_LT(test, ret, 0);
clk_put(clk);
}
/*
@ -1238,7 +1206,6 @@ static struct kunit_suite
clk_single_parent_mux_test_suite = {
.name = "clk-single-parent-mux-test",
.init = clk_single_parent_mux_test_init,
.exit = clk_single_parent_mux_test_exit,
.test_cases = clk_single_parent_mux_test_cases,
};


@ -439,7 +439,7 @@ unsigned long rockchip_clk_find_max_clk_id(struct rockchip_clk_branch *list,
if (list->id > max)
max = list->id;
if (list->child && list->child->id > max)
max = list->id;
max = list->child->id;
}
return max;


@ -1155,6 +1155,7 @@ static const struct of_device_id exynosautov920_cmu_of_match[] = {
.compatible = "samsung,exynosautov920-cmu-peric0",
.data = &peric0_cmu_info,
},
{ }
};
static struct platform_driver exynosautov920_cmu_driver __refdata = {


@ -536,11 +536,16 @@ static int amd_pstate_verify(struct cpufreq_policy_data *policy)
static int amd_pstate_update_min_max_limit(struct cpufreq_policy *policy)
{
u32 max_limit_perf, min_limit_perf, lowest_perf;
u32 max_limit_perf, min_limit_perf, lowest_perf, max_perf;
struct amd_cpudata *cpudata = policy->driver_data;
max_limit_perf = div_u64(policy->max * cpudata->highest_perf, cpudata->max_freq);
min_limit_perf = div_u64(policy->min * cpudata->highest_perf, cpudata->max_freq);
if (cpudata->boost_supported && !policy->boost_enabled)
max_perf = READ_ONCE(cpudata->nominal_perf);
else
max_perf = READ_ONCE(cpudata->highest_perf);
max_limit_perf = div_u64(policy->max * max_perf, policy->cpuinfo.max_freq);
min_limit_perf = div_u64(policy->min * max_perf, policy->cpuinfo.max_freq);
lowest_perf = READ_ONCE(cpudata->lowest_perf);
if (min_limit_perf < lowest_perf)
@ -1201,11 +1206,21 @@ static int amd_pstate_register_driver(int mode)
return -EINVAL;
cppc_state = mode;
ret = amd_pstate_enable(true);
if (ret) {
pr_err("failed to enable cppc during amd-pstate driver registration, return %d\n",
ret);
amd_pstate_driver_cleanup();
return ret;
}
ret = cpufreq_register_driver(current_pstate_driver);
if (ret) {
amd_pstate_driver_cleanup();
return ret;
}
return 0;
}
@ -1496,10 +1511,13 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
u64 value;
s16 epp;
max_perf = READ_ONCE(cpudata->highest_perf);
if (cpudata->boost_supported && !policy->boost_enabled)
max_perf = READ_ONCE(cpudata->nominal_perf);
else
max_perf = READ_ONCE(cpudata->highest_perf);
min_perf = READ_ONCE(cpudata->lowest_perf);
max_limit_perf = div_u64(policy->max * cpudata->highest_perf, cpudata->max_freq);
min_limit_perf = div_u64(policy->min * cpudata->highest_perf, cpudata->max_freq);
max_limit_perf = div_u64(policy->max * max_perf, policy->cpuinfo.max_freq);
min_limit_perf = div_u64(policy->min * max_perf, policy->cpuinfo.max_freq);
if (min_limit_perf < min_perf)
min_limit_perf = min_perf;


@ -947,7 +947,7 @@ struct ahash_alg mv_md5_alg = {
.base = {
.cra_name = "md5",
.cra_driver_name = "mv-md5",
.cra_priority = 300,
.cra_priority = 0,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY,
@ -1018,7 +1018,7 @@ struct ahash_alg mv_sha1_alg = {
.base = {
.cra_name = "sha1",
.cra_driver_name = "mv-sha1",
.cra_priority = 300,
.cra_priority = 0,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY,
@ -1092,7 +1092,7 @@ struct ahash_alg mv_sha256_alg = {
.base = {
.cra_name = "sha256",
.cra_driver_name = "mv-sha256",
.cra_priority = 300,
.cra_priority = 0,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY,
@ -1302,7 +1302,7 @@ struct ahash_alg mv_ahmac_md5_alg = {
.base = {
.cra_name = "hmac(md5)",
.cra_driver_name = "mv-hmac-md5",
.cra_priority = 300,
.cra_priority = 0,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY,
@ -1373,7 +1373,7 @@ struct ahash_alg mv_ahmac_sha1_alg = {
.base = {
.cra_name = "hmac(sha1)",
.cra_driver_name = "mv-hmac-sha1",
.cra_priority = 300,
.cra_priority = 0,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY,
@ -1444,7 +1444,7 @@ struct ahash_alg mv_ahmac_sha256_alg = {
.base = {
.cra_name = "hmac(sha256)",
.cra_driver_name = "mv-hmac-sha256",
.cra_priority = 300,
.cra_priority = 0,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY,


@ -1391,11 +1391,12 @@ static struct ep93xx_dma_engine *ep93xx_dma_of_probe(struct platform_device *pde
INIT_LIST_HEAD(&dma_dev->channels);
for (i = 0; i < edma->num_channels; i++) {
struct ep93xx_dma_chan *edmac = &edma->channels[i];
int len;
edmac->chan.device = dma_dev;
edmac->regs = devm_platform_ioremap_resource(pdev, i);
if (IS_ERR(edmac->regs))
return edmac->regs;
return ERR_CAST(edmac->regs);
edmac->irq = fwnode_irq_get(dev_fwnode(dev), i);
if (edmac->irq < 0)
@ -1404,9 +1405,11 @@ static struct ep93xx_dma_engine *ep93xx_dma_of_probe(struct platform_device *pde
edmac->edma = edma;
if (edma->m2m)
snprintf(dma_clk_name, sizeof(dma_clk_name), "m2m%u", i);
len = snprintf(dma_clk_name, sizeof(dma_clk_name), "m2m%u", i);
else
snprintf(dma_clk_name, sizeof(dma_clk_name), "m2p%u", i);
len = snprintf(dma_clk_name, sizeof(dma_clk_name), "m2p%u", i);
if (len >= sizeof(dma_clk_name))
return ERR_PTR(-ENOBUFS);
edmac->clk = devm_clk_get(dev, dma_clk_name);
if (IS_ERR(edmac->clk)) {


@ -481,11 +481,16 @@ static int ffa_msg_send_direct_req2(u16 src_id, u16 dst_id, const uuid_t *uuid,
struct ffa_send_direct_data2 *data)
{
u32 src_dst_ids = PACK_TARGET_INFO(src_id, dst_id);
union {
uuid_t uuid;
__le64 regs[2];
} uuid_regs = { .uuid = *uuid };
ffa_value_t ret, args = {
.a0 = FFA_MSG_SEND_DIRECT_REQ2, .a1 = src_dst_ids,
.a0 = FFA_MSG_SEND_DIRECT_REQ2,
.a1 = src_dst_ids,
.a2 = le64_to_cpu(uuid_regs.regs[0]),
.a3 = le64_to_cpu(uuid_regs.regs[1]),
};
export_uuid((u8 *)&args.a2, uuid);
memcpy((void *)&args + offsetof(ffa_value_t, a4), data, sizeof(*data));
invoke_ffa_fn(args, &ret);
@ -496,7 +501,7 @@ static int ffa_msg_send_direct_req2(u16 src_id, u16 dst_id, const uuid_t *uuid,
return ffa_to_linux_errno((int)ret.a2);
if (ret.a0 == FFA_MSG_SEND_DIRECT_RESP2) {
memcpy(data, &ret.a4, sizeof(*data));
memcpy(data, (void *)&ret + offsetof(ffa_value_t, a4), sizeof(*data));
return 0;
}


@ -2976,10 +2976,8 @@ static struct scmi_debug_info *scmi_debugfs_common_setup(struct scmi_info *info)
dbg->top_dentry = top_dentry;
if (devm_add_action_or_reset(info->dev,
scmi_debugfs_common_cleanup, dbg)) {
scmi_debugfs_common_cleanup(dbg);
scmi_debugfs_common_cleanup, dbg))
return NULL;
}
return dbg;
}


@ -1,8 +1,10 @@
# SPDX-License-Identifier: GPL-2.0-only
scmi_transport_mailbox-objs := mailbox.o
obj-$(CONFIG_ARM_SCMI_TRANSPORT_MAILBOX) += scmi_transport_mailbox.o
# Keep before scmi_transport_mailbox.o to allow precedence
# while matching the compatible.
scmi_transport_smc-objs := smc.o
obj-$(CONFIG_ARM_SCMI_TRANSPORT_SMC) += scmi_transport_smc.o
scmi_transport_mailbox-objs := mailbox.o
obj-$(CONFIG_ARM_SCMI_TRANSPORT_MAILBOX) += scmi_transport_mailbox.o
scmi_transport_optee-objs := optee.o
obj-$(CONFIG_ARM_SCMI_TRANSPORT_OPTEE) += scmi_transport_optee.o
scmi_transport_virtio-objs := virtio.o

View File

@ -25,6 +25,7 @@
* @chan_platform_receiver: Optional Platform Receiver mailbox unidirectional channel
* @cinfo: SCMI channel info
* @shmem: Transmit/Receive shared memory area
* @chan_lock: Lock that prevents multiple xfers from being queued
*/
struct scmi_mailbox {
struct mbox_client cl;
@ -33,6 +34,7 @@ struct scmi_mailbox {
struct mbox_chan *chan_platform_receiver;
struct scmi_chan_info *cinfo;
struct scmi_shared_mem __iomem *shmem;
struct mutex chan_lock;
};
#define client_to_scmi_mailbox(c) container_of(c, struct scmi_mailbox, cl)
@ -238,6 +240,7 @@ static int mailbox_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
cinfo->transport_info = smbox;
smbox->cinfo = cinfo;
mutex_init(&smbox->chan_lock);
return 0;
}
@ -267,13 +270,23 @@ static int mailbox_send_message(struct scmi_chan_info *cinfo,
struct scmi_mailbox *smbox = cinfo->transport_info;
int ret;
/*
* The mailbox layer has its own queue. However, the mailbox queue
* confuses the per-message SCMI timeouts, since the clock starts when
* a message is submitted into the mailbox queue. So when multiple
* messages are queued up, the clock starts on all of them instead of
* only the one in flight.
*/
mutex_lock(&smbox->chan_lock);
ret = mbox_send_message(smbox->chan, xfer);
/* mbox_send_message returns non-negative value on success */
if (ret < 0) {
mutex_unlock(&smbox->chan_lock);
return ret;
}
/* mbox_send_message returns non-negative value on success, so reset */
if (ret > 0)
ret = 0;
return ret;
return 0;
}
static void mailbox_mark_txdone(struct scmi_chan_info *cinfo, int ret,
@ -281,13 +294,10 @@ static void mailbox_mark_txdone(struct scmi_chan_info *cinfo, int ret,
{
struct scmi_mailbox *smbox = cinfo->transport_info;
/*
* NOTE: we might prefer not to need the mailbox ticker to manage the
* transfer queueing since the protocol layer queues things by itself.
* Unfortunately, we have to kick the mailbox framework after we have
* received our message.
*/
mbox_client_txdone(smbox->chan, ret);
/* Release channel */
mutex_unlock(&smbox->chan_lock);
}
static void mailbox_fetch_response(struct scmi_chan_info *cinfo,


@ -265,7 +265,7 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
/* Only a single BO list is allowed to simplify handling. */
if (p->bo_list)
ret = -EINVAL;
goto free_partial_kdata;
ret = amdgpu_cs_p1_bo_handles(p, p->chunks[i].kdata);
if (ret)


@ -1635,11 +1635,9 @@ int amdgpu_gfx_sysfs_isolation_shader_init(struct amdgpu_device *adev)
{
int r;
if (!amdgpu_sriov_vf(adev)) {
r = device_create_file(adev->dev, &dev_attr_enforce_isolation);
if (r)
return r;
}
r = device_create_file(adev->dev, &dev_attr_enforce_isolation);
if (r)
return r;
r = device_create_file(adev->dev, &dev_attr_run_cleaner_shader);
if (r)
@ -1650,8 +1648,7 @@ int amdgpu_gfx_sysfs_isolation_shader_init(struct amdgpu_device *adev)
void amdgpu_gfx_sysfs_isolation_shader_fini(struct amdgpu_device *adev)
{
if (!amdgpu_sriov_vf(adev))
device_remove_file(adev->dev, &dev_attr_enforce_isolation);
device_remove_file(adev->dev, &dev_attr_enforce_isolation);
device_remove_file(adev->dev, &dev_attr_run_cleaner_shader);
}


@ -1203,8 +1203,10 @@ int amdgpu_mes_add_ring(struct amdgpu_device *adev, int gang_id,
r = amdgpu_ring_init(adev, ring, 1024, NULL, 0,
AMDGPU_RING_PRIO_DEFAULT, NULL);
if (r)
if (r) {
amdgpu_mes_unlock(&adev->mes);
goto clean_up_memory;
}
amdgpu_mes_ring_to_queue_props(adev, ring, &qprops);
@ -1237,7 +1239,6 @@ clean_up_ring:
amdgpu_ring_fini(ring);
clean_up_memory:
kfree(ring);
amdgpu_mes_unlock(&adev->mes);
return r;
}


@ -621,7 +621,7 @@ static int mes_v12_0_set_hw_resources(struct amdgpu_mes *mes, int pipe)
if (amdgpu_mes_log_enable) {
mes_set_hw_res_pkt.enable_mes_event_int_logging = 1;
mes_set_hw_res_pkt.event_intr_history_gpu_mc_ptr = mes->event_log_gpu_addr;
mes_set_hw_res_pkt.event_intr_history_gpu_mc_ptr = mes->event_log_gpu_addr + pipe * AMDGPU_MES_LOG_BUFFER_SIZE;
}
return mes_v12_0_submit_pkt_and_poll_completion(mes, pipe,
@ -1336,7 +1336,7 @@ static int mes_v12_0_sw_init(void *handle)
adev->mes.kiq_hw_fini = &mes_v12_0_kiq_hw_fini;
adev->mes.enable_legacy_queue_map = true;
adev->mes.event_log_size = AMDGPU_MES_LOG_BUFFER_SIZE;
adev->mes.event_log_size = adev->enable_uni_mes ? (AMDGPU_MAX_MES_PIPES * AMDGPU_MES_LOG_BUFFER_SIZE) : AMDGPU_MES_LOG_BUFFER_SIZE;
r = amdgpu_mes_init(adev);
if (r)

Some files were not shown because too many files have changed in this diff.