regulator: init_data handling update

Merge series from Jerome Brunet <jbrunet@baylibre.com>:

This patchset groups the regulator patches around the init_data topic
discussed in the pmbus write protect patchset [1].

[1]: https://lore.kernel.org/r/20240920-pmbus-wp-v1-0-d679ef31c483@baylibre.com
Mark Brown 2024-10-22 23:18:45 +01:00
commit 5ddc236d09
829 changed files with 11952 additions and 8064 deletions


@ -73,6 +73,8 @@ Andrey Ryabinin <ryabinin.a.a@gmail.com> <aryabinin@virtuozzo.com>
Andrzej Hajda <andrzej.hajda@intel.com> <a.hajda@samsung.com>
André Almeida <andrealmeid@igalia.com> <andrealmeid@collabora.com>
Andy Adamson <andros@citi.umich.edu>
Andy Chiu <andybnac@gmail.com> <andy.chiu@sifive.com>
Andy Chiu <andybnac@gmail.com> <taochiu@synology.com>
Andy Shevchenko <andy@kernel.org> <andy@smile.org.ua>
Andy Shevchenko <andy@kernel.org> <ext-andriy.shevchenko@nokia.com>
Anilkumar Kolli <quic_akolli@quicinc.com> <akolli@codeaurora.org>
@ -203,12 +205,16 @@ Ezequiel Garcia <ezequiel@vanguardiasur.com.ar> <ezequiel@collabora.com>
Faith Ekstrand <faith.ekstrand@collabora.com> <jason@jlekstrand.net>
Faith Ekstrand <faith.ekstrand@collabora.com> <jason.ekstrand@intel.com>
Faith Ekstrand <faith.ekstrand@collabora.com> <jason.ekstrand@collabora.com>
Fangrui Song <i@maskray.me> <maskray@google.com>
Felipe W Damasio <felipewd@terra.com.br>
Felix Kuhling <fxkuehl@gmx.de>
Felix Moeller <felix@derklecks.de>
Fenglin Wu <quic_fenglinw@quicinc.com> <fenglinw@codeaurora.org>
Filipe Lautert <filipe@icewall.org>
Finn Thain <fthain@linux-m68k.org> <fthain@telegraphics.com.au>
Fiona Behrens <me@kloenk.dev>
Fiona Behrens <me@kloenk.dev> <me@kloenk.de>
Fiona Behrens <me@kloenk.dev> <fin@nyantec.com>
Franck Bui-Huu <vagabon.xyz@gmail.com>
Frank Rowand <frowand.list@gmail.com> <frank.rowand@am.sony.com>
Frank Rowand <frowand.list@gmail.com> <frank.rowand@sony.com>

CREDITS

@ -1358,10 +1358,6 @@ D: Major kbuild rework during the 2.5 cycle
D: ISDN Maintainer
S: USA
N: Gerrit Renker
E: gerrit@erg.abdn.ac.uk
D: DCCP protocol support.
N: Philip Gladstone
E: philip@gladstonefamily.net
D: Kernel / timekeeping stuff
@ -1677,11 +1673,6 @@ W: http://www.carumba.com/
D: bug toaster (A1 sauce makes all the difference)
D: Random linux hacker
N: James Hogan
E: jhogan@kernel.org
D: Metag architecture maintainer
D: TZ1090 SoC maintainer
N: Tim Hockin
E: thockin@hockin.org
W: http://www.hockin.org/~thockin
@ -1697,6 +1688,11 @@ D: hwmon subsystem maintainer
D: i2c-sis96x and i2c-stub SMBus drivers
S: USA
N: James Hogan
E: jhogan@kernel.org
D: Metag architecture maintainer
D: TZ1090 SoC maintainer
N: Dirk Hohndel
E: hohndel@suse.de
D: The XFree86[tm] Project
@ -1872,6 +1868,10 @@ S: K osmidomkum 723
S: 160 00 Praha 6
S: Czech Republic
N: Seth Jennings
E: sjenning@redhat.com
D: Creation and maintenance of zswap
N: Jeremy Kerr
D: Maintainer of SPU File System
@ -2188,19 +2188,6 @@ N: Mike Kravetz
E: mike.kravetz@oracle.com
D: Maintenance and development of the hugetlb subsystem
N: Seth Jennings
E: sjenning@redhat.com
D: Creation and maintenance of zswap
N: Dan Streetman
E: ddstreet@ieee.org
D: Maintenance and development of zswap
D: Creation and maintenance of the zpool API
N: Vitaly Wool
E: vitaly.wool@konsulko.com
D: Maintenance and development of zswap
N: Andreas S. Krebs
E: akrebs@altavista.net
D: CYPRESS CY82C693 chipset IDE, Digital's PC-Alpha 164SX boards
@ -3191,6 +3178,11 @@ N: Ken Pizzini
E: ken@halcyon.com
D: CDROM driver "sonycd535" (Sony CDU-535/531)
N: Mathieu Poirier
E: mathieu.poirier@linaro.org
D: CoreSight kernel subsystem, Maintainer 2014-2022
D: Perf tool support for CoreSight
N: Stelian Pop
E: stelian@popies.net
P: 1024D/EDBB6147 7B36 0E07 04BC 11DC A7A0 D3F7 7185 9E7A EDBB 6147
@ -3300,6 +3292,10 @@ S: Schlossbergring 9
S: 79098 Freiburg
S: Germany
N: Gerrit Renker
E: gerrit@erg.abdn.ac.uk
D: DCCP protocol support.
N: Thomas Renninger
E: trenn@suse.de
D: cpupowerutils
@ -3576,11 +3572,6 @@ D: several improvements to system programs
S: Oldenburg
S: Germany
N: Mathieu Poirier
E: mathieu.poirier@linaro.org
D: CoreSight kernel subsystem, Maintainer 2014-2022
D: Perf tool support for CoreSight
N: Robert Schwebel
E: robert@schwebel.de
W: https://www.schwebel.de
@ -3771,6 +3762,11 @@ S: Chr. Winthersvej 1 B, st.th.
S: DK-1860 Frederiksberg C
S: Denmark
N: Dan Streetman
E: ddstreet@ieee.org
D: Maintenance and development of zswap
D: Creation and maintenance of the zpool API
N: Drew Sullivan
E: drew@ss.org
W: http://www.ss.org/
@ -4286,6 +4282,10 @@ S: Pipers Way
S: Swindon. SN3 1RJ
S: England
N: Vitaly Wool
E: vitaly.wool@konsulko.com
D: Maintenance and development of zswap
N: Chris Wright
E: chrisw@sous-sol.org
D: hacking on LSM framework and security modules.


@ -223,7 +223,10 @@ are signed through the PKCS#7 message format to enforce some level of
authorization of the policies (prohibiting an attacker from gaining
unconstrained root, and deploying an "allow all" policy). These
policies must be signed by a certificate that chains to the
``SYSTEM_TRUSTED_KEYRING``. With openssl, the policy can be signed by::
``SYSTEM_TRUSTED_KEYRING``, or to the secondary and/or platform keyrings if
``CONFIG_IPE_POLICY_SIG_SECONDARY_KEYRING`` and/or
``CONFIG_IPE_POLICY_SIG_PLATFORM_KEYRING`` are enabled, respectively.
With openssl, the policy can be signed by::
openssl smime -sign \
-in "$MY_POLICY" \
@ -266,7 +269,7 @@ in the kernel. This file is write-only and accepts a PKCS#7 signed
policy. Two checks will always be performed on this policy: First, the
``policy_names`` must match with the updated version and the existing
version. Second the updated policy must have a policy version greater than
or equal to the currently-running version. This is to prevent rollback attacks.
the currently-running version. This is to prevent rollback attacks.
The ``delete`` file is used to remove a policy that is no longer needed.
This file is write-only and accepts a value of ``1`` to delete the policy.
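
For illustration only, these two files can be driven from userspace as in the
following C sketch; the securityfs paths and the ``MyPolicy`` policy name are
assumptions inferred from this document rather than verified interfaces::

    #include <fcntl.h>
    #include <unistd.h>

    /* Deploy an updated PKCS#7-signed policy blob (assumed path). */
    static int ipe_update_policy(const void *p7b, size_t len)
    {
            int fd = open("/sys/kernel/security/ipe/policies/MyPolicy/update",
                          O_WRONLY);
            ssize_t n = fd < 0 ? -1 : write(fd, p7b, len);

            if (fd >= 0)
                    close(fd);
            return n == (ssize_t)len ? 0 : -1;
    }

    /* Remove the policy by writing "1" to its delete file. */
    static int ipe_delete_policy(void)
    {
            int fd = open("/sys/kernel/security/ipe/policies/MyPolicy/delete",
                          O_WRONLY);
            int ok = fd >= 0 && write(fd, "1", 1) == 1;

            if (fd >= 0)
                    close(fd);
            return ok ? 0 : -1;
    }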


@ -12,7 +12,10 @@ Pkeys Userspace (PKU) is a feature which can be found on:
* Intel server CPUs, Skylake and later
* Intel client CPUs, Tiger Lake (11th Gen Core) and later
* Future AMD CPUs
* arm64 CPUs implementing the Permission Overlay Extension (FEAT_S1POE)
x86_64
======
Pkeys work by dedicating 4 previously Reserved bits in each page table entry to
a "protection key", giving 16 possible keys.
@ -28,6 +31,22 @@ register. The feature is only available in 64-bit mode, even though there is
theoretically space in the PAE PTEs. These permissions are enforced on data
access only and have no effect on instruction fetches.
arm64
=====
Pkeys use 3 bits in each page table entry, to encode a "protection key index",
giving 8 possible keys.
Protections for each key are defined with a per-CPU user-writable system
register (POR_EL0). This is a 64-bit register encoding read, write and execute
overlay permissions for each protection key index.
Being a CPU register, POR_EL0 is inherently thread-local, potentially giving
each thread a different set of protections from every other thread.
Unlike x86_64, the protection key permissions also apply to instruction
fetches.
Syscalls
========
@ -38,11 +57,10 @@ There are 3 system calls which directly interact with pkeys::
int pkey_mprotect(unsigned long start, size_t len,
unsigned long prot, int pkey);
Before a pkey can be used, it must first be allocated with
pkey_alloc(). An application calls the WRPKRU instruction
directly in order to change access permissions to memory covered
with a key. In this example WRPKRU is wrapped by a C function
called pkey_set().
Before a pkey can be used, it must first be allocated with pkey_alloc(). An
application writes to the architecture specific CPU register directly in order
to change access permissions to memory covered with a key. In this example
this is wrapped by a C function called pkey_set().
::
int real_prot = PROT_READ|PROT_WRITE;
@ -64,9 +82,9 @@ is no longer in use::
munmap(ptr, PAGE_SIZE);
pkey_free(pkey);
.. note:: pkey_set() is a wrapper for the RDPKRU and WRPKRU instructions.
An example implementation can be found in
tools/testing/selftests/x86/protection_keys.c.
.. note:: pkey_set() is a wrapper around writing to the CPU register.
Example implementations can be found in
tools/testing/selftests/mm/pkey-{arm64,powerpc,x86}.h
Behavior
========
@ -96,3 +114,7 @@ with a read()::
The kernel will send a SIGSEGV in both cases, but si_code will be set
to SEGV_PKERR when violating protection keys versus SEGV_ACCERR when
the plain mprotect() permissions are violated.
Note that kernel accesses from a kthread (such as io_uring) will use a default
value for the protection key register and so will not be consistent with
userspace's value of the register or mprotect().
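
Putting the pieces above together, a minimal sketch of the allocate/tag/free
lifecycle (error handling omitted; ``pkey_set()`` stands for the per-arch
register wrapper described above, e.g. from the selftest headers)::

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            long page = sysconf(_SC_PAGESIZE);
            int real_prot = PROT_READ | PROT_WRITE;

            /* Reserve a key whose initial rights deny writes. */
            int pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);

            char *ptr = mmap(NULL, page, PROT_NONE,
                             MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

            /* Apply the real protections and tag the mapping with the key. */
            pkey_mprotect(ptr, page, real_prot, pkey);

            /* Reads succeed here; writes fault with si_code == SEGV_PKERR
             * until pkey_set() relaxes the key's permissions. */

            munmap(ptr, page);
            pkey_free(pkey);
            return 0;
    }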


@ -0,0 +1,54 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/elgin,jg10309-01.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Elgin JG10309-01 SPI-controlled display
maintainers:
- Fabio Estevam <festevam@gmail.com>
description: |
The Elgin JG10309-01 SPI-controlled display is used on the RV1108-Elgin-r1
board and is a custom display.
allOf:
- $ref: /schemas/spi/spi-peripheral-props.yaml#
properties:
compatible:
const: elgin,jg10309-01
reg:
maxItems: 1
spi-max-frequency:
maximum: 24000000
spi-cpha: true
spi-cpol: true
required:
- compatible
- reg
- spi-cpha
- spi-cpol
additionalProperties: false
examples:
- |
spi {
#address-cells = <1>;
#size-cells = <0>;
display@0 {
compatible = "elgin,jg10309-01";
reg = <0>;
spi-max-frequency = <24000000>;
spi-cpha;
spi-cpol;
};
};


@ -4,7 +4,7 @@
$id: http://devicetree.org/schemas/iio/dac/adi,ad5686.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Analog Devices AD5360 and similar DACs
title: Analog Devices AD5360 and similar SPI DACs
maintainers:
- Michael Hennerich <michael.hennerich@analog.com>
@ -12,41 +12,22 @@ maintainers:
properties:
compatible:
oneOf:
- description: SPI devices
enum:
- adi,ad5310r
- adi,ad5672r
- adi,ad5674r
- adi,ad5676
- adi,ad5676r
- adi,ad5679r
- adi,ad5681r
- adi,ad5682r
- adi,ad5683
- adi,ad5683r
- adi,ad5684
- adi,ad5684r
- adi,ad5685r
- adi,ad5686
- adi,ad5686r
- description: I2C devices
enum:
- adi,ad5311r
- adi,ad5337r
- adi,ad5338r
- adi,ad5671r
- adi,ad5675r
- adi,ad5691r
- adi,ad5692r
- adi,ad5693
- adi,ad5693r
- adi,ad5694
- adi,ad5694r
- adi,ad5695r
- adi,ad5696
- adi,ad5696r
enum:
- adi,ad5310r
- adi,ad5672r
- adi,ad5674r
- adi,ad5676
- adi,ad5676r
- adi,ad5679r
- adi,ad5681r
- adi,ad5682r
- adi,ad5683
- adi,ad5683r
- adi,ad5684
- adi,ad5684r
- adi,ad5685r
- adi,ad5686
- adi,ad5686r
reg:
maxItems: 1


@ -4,7 +4,7 @@
$id: http://devicetree.org/schemas/iio/dac/adi,ad5696.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Analog Devices AD5696 and similar multi-channel DACs
title: Analog Devices AD5696 and similar I2C multi-channel DACs
maintainers:
- Michael Auchter <michael.auchter@ni.com>
@ -16,6 +16,7 @@ properties:
compatible:
enum:
- adi,ad5311r
- adi,ad5337r
- adi,ad5338r
- adi,ad5671r
- adi,ad5675r


@ -82,9 +82,6 @@ allOf:
enum:
- fsl,ls1043a-extirq
- fsl,ls1046a-extirq
- fsl,ls1088a-extirq
- fsl,ls2080a-extirq
- fsl,lx2160a-extirq
then:
properties:
interrupt-map:
@ -95,6 +92,29 @@ allOf:
- const: 0xf
- const: 0
- if:
properties:
compatible:
contains:
enum:
- fsl,ls1088a-extirq
- fsl,ls2080a-extirq
- fsl,lx2160a-extirq
# The driver (drivers/irqchip/irq-ls-extirq.c) does not use the standard DT
# function to parse interrupt-map, so it does not consider '#address-cells'
# in the parent interrupt controller, such as the GIC.
#
# When the dt-binding validates interrupt-map, the item data matrix is split
# at the incorrect position. Remove the interrupt-map restriction because it
# is always wrong.
then:
properties:
interrupt-map-mask:
items:
- const: 0xf
- const: 0
additionalProperties: false
examples:


@ -113,7 +113,7 @@ properties:
msi-parent:
deprecated: true
$ref: /schemas/types.yaml#/definitions/phandle
maxItems: 1
description:
Describes the MSI controller node handling message
interrupts for the MC. When there is no translation


@ -26,6 +26,7 @@ properties:
- brcm,asp-v2.1-mdio
- brcm,asp-v2.2-mdio
- brcm,unimac-mdio
- brcm,bcm6846-mdio
reg:
minItems: 1


@ -101,8 +101,6 @@ properties:
- domintech,dmard09
# DMARD10: 3-axis Accelerometer
- domintech,dmard10
# Elgin SPI-controlled LCD
- elgin,jg10309-01
# MMA7660FC: 3-Axis Orientation/Motion Detection Sensor
- fsl,mma7660
# MMA8450Q: Xtrinsic Low-power, 3-axis Xtrinsic Accelerometer


@ -208,7 +208,7 @@ The filesystem must arrange to `cancel
such `reservations
<https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/>`_
because writeback will not consume the reservation.
The ``iomap_file_buffered_write_punch_delalloc`` can be called from a
The ``iomap_write_delalloc_release`` can be called from a
``->iomap_end`` function to find all the clean areas of the folios
caching a fresh (``IOMAP_F_NEW``) delalloc mapping.
It takes the ``invalidate_lock``.


@ -7,26 +7,26 @@ The DAMON subsystem covers the files that are listed in 'DATA ACCESS MONITOR'
section of 'MAINTAINERS' file.
The mailing lists for the subsystem are damon@lists.linux.dev and
linux-mm@kvack.org. Patches should be made against the mm-unstable `tree
<https://git.kernel.org/akpm/mm/h/mm-unstable>` whenever possible and posted to
the mailing lists.
linux-mm@kvack.org. Patches should be made against the `mm-unstable tree
<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ whenever possible and posted
to the mailing lists.
SCM Trees
---------
There are multiple Linux trees for DAMON development. Patches under
development or testing are queued in `damon/next
<https://git.kernel.org/sj/h/damon/next>` by the DAMON maintainer.
<https://git.kernel.org/sj/h/damon/next>`_ by the DAMON maintainer.
Sufficiently reviewed patches will be queued in `mm-unstable
<https://git.kernel.org/akpm/mm/h/mm-unstable>` by the memory management
<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ by the memory management
subsystem maintainer. After more sufficient tests, the patches will be queued
in `mm-stable <https://git.kernel.org/akpm/mm/h/mm-stable>` , and finally
in `mm-stable <https://git.kernel.org/akpm/mm/h/mm-stable>`_, and finally
pull-requested to the mainline by the memory management subsystem maintainer.
Note again the patches for mm-unstable `tree
<https://git.kernel.org/akpm/mm/h/mm-unstable>` are queued by the memory
Note again the patches for `mm-unstable tree
<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ are queued by the memory
management subsystem maintainer. If the patches require some patches in
damon/next `tree <https://git.kernel.org/sj/h/damon/next>` which not yet merged
`damon/next tree <https://git.kernel.org/sj/h/damon/next>`_ which are not yet merged
in mm-unstable, please make sure the requirement is clearly specified.
Submit checklist addendum
@ -37,25 +37,25 @@ When making DAMON changes, you should do below.
- Build the outputs related to the changes, including the kernel and documents.
- Ensure the builds introduce no new errors or warnings.
- Run and ensure no new failures for DAMON `selftests
<https://github.com/awslabs/damon-tests/blob/master/corr/run.sh#L49>` and
<https://github.com/damonitor/damon-tests/blob/master/corr/run.sh#L49>`_ and
`kunittests
<https://github.com/awslabs/damon-tests/blob/master/corr/tests/kunit.sh>`.
<https://github.com/damonitor/damon-tests/blob/master/corr/tests/kunit.sh>`_.
Further, doing the below and including the results will be helpful.
- Run `damon-tests/corr
<https://github.com/awslabs/damon-tests/tree/master/corr>` for normal
<https://github.com/damonitor/damon-tests/tree/master/corr>`_ for normal
changes.
- Run `damon-tests/perf
<https://github.com/awslabs/damon-tests/tree/master/perf>` for performance
<https://github.com/damonitor/damon-tests/tree/master/perf>`_ for performance
changes.
Key cycle dates
---------------
Patches can be sent anytime. Key cycle dates of the `mm-unstable
<https://git.kernel.org/akpm/mm/h/mm-unstable>` and `mm-stable
<https://git.kernel.org/akpm/mm/h/mm-stable>` trees depend on the memory
<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ and `mm-stable
<https://git.kernel.org/akpm/mm/h/mm-stable>`_ trees depend on the memory
management subsystem maintainer.
Review cadence
@ -72,13 +72,13 @@ Mailing tool
Like many other Linux kernel subsystems, DAMON uses the mailing lists
(damon@lists.linux.dev and linux-mm@kvack.org) as the major communication
channel. There is a simple tool called `HacKerMaiL
<https://github.com/damonitor/hackermail>` (``hkml``), which is for people who
<https://github.com/damonitor/hackermail>`_ (``hkml``), which is for people who
are not very familiar with mailing-list-based communication. The tool
could be particularly helpful for DAMON community members since it is developed
and maintained by the DAMON maintainer. The tool is also officially announced to
support DAMON and general Linux kernel development workflow.
In other words, `hkml <https://github.com/damonitor/hackermail>` is a mailing
In other words, `hkml <https://github.com/damonitor/hackermail>`_ is a mailing
tool for DAMON community, which DAMON maintainer is committed to support.
Please feel free to try and report issues or feature requests for the tool to
the maintainer.
@ -98,8 +98,8 @@ slots, and attendees should reserve one of those at least 24 hours before the
time slot, by reaching out to the maintainer.
Schedules and available reservation time slots are available at the Google `doc
<https://docs.google.com/document/d/1v43Kcj3ly4CYqmAkMaZzLiM2GEnWfgdGbZAH3mi2vpM/edit?usp=sharing>`.
<https://docs.google.com/document/d/1v43Kcj3ly4CYqmAkMaZzLiM2GEnWfgdGbZAH3mi2vpM/edit?usp=sharing>`_.
There is also a public Google `calendar
<https://calendar.google.com/calendar/u/0?cid=ZDIwOTA4YTMxNjc2MDQ3NTIyMmUzYTM5ZmQyM2U4NDA0ZGIwZjBiYmJlZGQxNDM0MmY4ZTRjOTE0NjdhZDRiY0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t>`
<https://calendar.google.com/calendar/u/0?cid=ZDIwOTA4YTMxNjc2MDQ3NTIyMmUzYTM5ZmQyM2U4NDA0ZGIwZjBiYmJlZGQxNDM0MmY4ZTRjOTE0NjdhZDRiY0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t>`_
that has the events. Anyone can subscribe to it. The DAMON maintainer will also
provide periodic reminders to the mailing list (damon@lists.linux.dev).


@ -9,7 +9,7 @@ segments between trusted peers. It adds a new TCP header option with
a Message Authentication Code (MAC). MACs are produced from the content
of a TCP segment using a hashing function with a password known to both peers.
The intent of TCP-AO is to deprecate TCP-MD5 providing better security,
key rotation and support for variety of hashing algorithms.
key rotation and support for a variety of hashing algorithms.
1. Introduction
===============
@ -164,9 +164,9 @@ A: It should not, no action needs to be performed [7.5.2.e]::
is not available, no action is required (RNextKeyID of a received
segment needs to match the MKTs SendID).
Q: How current_key is set and when does it change? It is a user-triggered
change, or is it by a request from the remote peer? Is it set by the user
explicitly, or by a matching rule?
Q: How is current_key set, and when does it change? Is it a user-triggered
change, or is it triggered by a request from the remote peer? Is it set by the
user explicitly, or by a matching rule?
A: current_key is set by RNextKeyID [6.1]::
@ -233,8 +233,8 @@ always have one current_key [3.3]::
Q: Can a non-TCP-AO connection become a TCP-AO-enabled one?
A: No: for already established non-TCP-AO connection it would be impossible
to switch using TCP-AO as the traffic key generation requires the initial
A: No: for an already established non-TCP-AO connection it would be impossible
to switch to using TCP-AO, as the traffic key generation requires the initial
sequence numbers. Paraphrasing: starting to use TCP-AO would require
re-establishing the TCP connection.
@ -292,7 +292,7 @@ no transparency is really needed and modern BGP daemons already have
Linux provides a set of ``setsockopt()s`` and ``getsockopt()s`` that let
userspace manage TCP-AO on a per-socket basis. In order to add/delete MKTs
``TCP_AO_ADD_KEY`` and ``TCP_AO_DEL_KEY`` TCP socket options must be used
``TCP_AO_ADD_KEY`` and ``TCP_AO_DEL_KEY`` TCP socket options must be used.
It is not allowed to add a key on an established non-TCP-AO connection,
nor to remove the last key from a TCP-AO connection.
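
As a hedged sketch (assuming the UAPI ``struct tcp_ao_add`` and the
``TCP_AO_ADD_KEY`` option exported by ``linux/tcp.h``; the field names are
not restated anywhere in this commit), adding an MKT for an IPv4 peer before
``connect()`` looks roughly like::

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <linux/tcp.h>   /* struct tcp_ao_add, TCP_AO_ADD_KEY */

    static int add_ao_key(int sk, const struct sockaddr_in *peer,
                          const char *secret, unsigned char sndid,
                          unsigned char rcvid)
    {
            struct tcp_ao_add ao;

            memset(&ao, 0, sizeof(ao));
            memcpy(&ao.addr, peer, sizeof(*peer)); /* MKT matches this peer */
            strcpy(ao.alg_name, "hmac(sha256)");   /* hashing algorithm */
            ao.prefix = 32;                        /* full IPv4 address */
            ao.sndid = sndid;                      /* SendID on outgoing segments */
            ao.rcvid = rcvid;                      /* RecvID matched on receive */
            ao.keylen = strlen(secret);            /* must fit TCP_AO_MAXKEYLEN */
            memcpy(ao.key, secret, ao.keylen);

            /* Per the rule above: add keys before the connection exists. */
            return setsockopt(sk, IPPROTO_TCP, TCP_AO_ADD_KEY,
                              &ao, sizeof(ao));
    }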
@ -361,7 +361,7 @@ not implemented.
4. ``setsockopt()`` vs ``accept()`` race
========================================
In contrast with TCP-MD5 established connection which has just one key,
In contrast with an established TCP-MD5 connection which has just one key,
TCP-AO connections may have many keys, which means that accepted connections
on a listen socket may have any amount of keys as well. As copying all those
keys on a first properly signed SYN would make the request socket bigger, that
@ -374,7 +374,7 @@ keys from sockets that were already established, but not yet ``accept()``'ed,
hanging in the accept queue.
The reverse is valid as well: if userspace adds a new key for a peer on
a listener socket, the established sockets in accept queue won't
a listener socket, the established sockets in the accept queue won't
have the new keys.
At this moment, the resolution for the two races:
@ -382,7 +382,7 @@ At this moment, the resolution for the two races:
and ``setsockopt(TCP_AO_DEL_KEY)`` vs ``accept()`` is delegated to userspace.
This means that it's expected that userspace would check the MKTs on the socket
that was returned by ``accept()`` to verify that any key rotation that
happened on listen socket is reflected on the newly established connection.
happened on the listen socket is reflected on the newly established connection.
This is a similar "do-nothing" approach to TCP-MD5 from the kernel side and
may be changed later by introducing new flags to ``tcp_ao_add``


@ -355,6 +355,8 @@ just do it. As a result, a sequence of smaller series gets merged quicker and
with better review coverage. Re-posting large series also increases the mailing
list traffic.
.. _rcs:
Local variable ordering ("reverse xmas tree", "RCS")
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -391,6 +393,21 @@ APIs and helpers, especially scoped iterators. However, direct use of
``__free()`` within networking core and drivers is discouraged.
Similar guidance applies to declaring variables mid-function.
Clean-up patches
~~~~~~~~~~~~~~~~
Netdev discourages patches which perform simple clean-ups that are not in
the context of other work. For example:
* Addressing ``checkpatch.pl`` warnings
* Addressing :ref:`Local variable ordering<rcs>` issues
* Conversions to device-managed APIs (``devm_`` helpers)
This is because it is felt that the churn that such changes produce comes
at a greater cost than the value of such clean-ups.
Conversely, spelling and grammar fixes are not discouraged.
Resending after review
~~~~~~~~~~~~~~~~~~~~~~


@ -30,10 +30,13 @@ tree as a dedicated branch covering multiple subsystems.
The main SoC tree is housed on git.kernel.org:
https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc.git/
Maintainers
-----------
Clearly this is quite a wide range of topics, which no one person, or even a
small group of people, is capable of maintaining. Instead, the SoC subsystem
is comprised of many submaintainers, each taking care of individual platforms
and driver subdirectories.
is comprised of many submaintainers (platform maintainers), each taking care of
individual platforms and driver subdirectories.
In this regard, "platform" usually refers to a series of SoCs from a given
vendor, for example, Nvidia's series of Tegra SoCs. Many submaintainers operate
on a vendor level, responsible for multiple product lines. For several reasons,
@ -43,14 +46,43 @@ MAINTAINERS file.
Most of these submaintainers have their own trees where they stage patches,
sending pull requests to the main SoC tree. These trees are usually, but not
always, listed in MAINTAINERS. The main SoC maintainers can be reached via the
alias soc@kernel.org if there is no platform-specific maintainer, or if they
are unresponsive.
always, listed in MAINTAINERS.
What the SoC tree is not, however, is a location for architecture-specific code
changes. Each architecture has its own maintainers that are responsible for
architectural details, CPU errata and the like.
Submitting Patches for Given SoC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All typical platform-related patches should be sent via the SoC submaintainers
(platform-specific maintainers). This also includes changes to per-platform or
shared defconfigs (scripts/get_maintainer.pl might not provide the correct
addresses in such cases).
Submitting Patches to the Main SoC Maintainers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The main SoC maintainers can be reached via the alias soc@kernel.org only in
the following cases:
1. There are no platform-specific maintainers.
2. Platform-specific maintainers are unresponsive.
3. Introducing a completely new SoC platform. Such new SoC work should be sent
first to common mailing lists, pointed out by scripts/get_maintainer.pl, for
community review. After positive community review, work should be sent to
soc@kernel.org in one patchset containing new arch/foo/Kconfig entry, DTS
files, MAINTAINERS file entry and optionally initial drivers with their
Devicetree bindings. The MAINTAINERS file entry should list new
platform-specific maintainers, who are going to be responsible for handling
patches for the platform from now on.
Note that soc@kernel.org is usually not the place to discuss patches, so
work sent to this address should already be considered acceptable by
the community.
Information for (new) Submaintainers
------------------------------------


@ -66,7 +66,7 @@ BPF scheduler and reverts all tasks back to CFS.
.. code-block:: none
# make -j16 -C tools/sched_ext
# tools/sched_ext/scx_simple
# tools/sched_ext/build/bin/scx_simple
local=0 global=3
local=5 global=24
local=9 global=44


@ -258,12 +258,6 @@ L: linux-acenic@sunsite.dk
S: Maintained
F: drivers/net/ethernet/alteon/acenic*
ACER ASPIRE 1 EMBEDDED CONTROLLER DRIVER
M: Nikita Travkin <nikita@trvn.ru>
S: Maintained
F: Documentation/devicetree/bindings/platform/acer,aspire1-ec.yaml
F: drivers/platform/arm64/acer-aspire1-ec.c
ACER ASPIRE ONE TEMPERATURE AND FAN DRIVER
M: Peter Kaestle <peter@piie.net>
L: platform-driver-x86@vger.kernel.org
@ -888,7 +882,6 @@ F: drivers/staging/media/sunxi/cedrus/
ALPHA PORT
M: Richard Henderson <richard.henderson@linaro.org>
M: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
M: Matt Turner <mattst88@gmail.com>
L: linux-alpha@vger.kernel.org
S: Odd Fixes
@ -1761,8 +1754,8 @@ F: include/uapi/linux/if_arcnet.h
ARM AND ARM64 SoC SUB-ARCHITECTURES (COMMON PARTS)
M: Arnd Bergmann <arnd@arndb.de>
M: Olof Johansson <olof@lixom.net>
M: soc@kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: soc@lists.linux.dev
S: Maintained
P: Documentation/process/maintainer-soc.rst
C: irc://irc.libera.chat/armlinux
@ -2263,12 +2256,6 @@ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-ep93xx/ts72xx.c
ARM/CIRRUS LOGIC CLPS711X ARM ARCHITECTURE
M: Alexander Shiyan <shc_work@mail.ru>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Odd Fixes
N: clps711x
ARM/CIRRUS LOGIC EP93XX ARM ARCHITECTURE
M: Hartley Sweeten <hsweeten@visionengravers.com>
M: Alexander Sverdlin <alexander.sverdlin@gmail.com>
@ -3815,14 +3802,6 @@ F: drivers/video/backlight/
F: include/linux/backlight.h
F: include/linux/pwm_backlight.h
BAIKAL-T1 PVT HARDWARE MONITOR DRIVER
M: Serge Semin <fancer.lancer@gmail.com>
L: linux-hwmon@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/hwmon/baikal,bt1-pvt.yaml
F: Documentation/hwmon/bt1-pvt.rst
F: drivers/hwmon/bt1-pvt.[ch]
BARCO P50 GPIO DRIVER
M: Santosh Kumar Yadav <santoshkumar.yadav@barco.com>
M: Peter Korsgaard <peter.korsgaard@barco.com>
@ -6476,7 +6455,6 @@ F: drivers/mtd/nand/raw/denali*
DESIGNWARE EDMA CORE IP DRIVER
M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
R: Serge Semin <fancer.lancer@gmail.com>
L: dmaengine@vger.kernel.org
S: Maintained
F: drivers/dma/dw-edma/
@ -9759,14 +9737,6 @@ F: drivers/gpio/gpiolib-cdev.c
F: include/uapi/linux/gpio.h
F: tools/gpio/
GRE DEMULTIPLEXER DRIVER
M: Dmitry Kozlov <xeb@mail.ru>
L: netdev@vger.kernel.org
S: Maintained
F: include/net/gre.h
F: net/ipv4/gre_demux.c
F: net/ipv4/gre_offload.c
GRETH 10/100/1G Ethernet MAC device driver
M: Andreas Larsson <andreas@gaisler.com>
L: netdev@vger.kernel.org
@ -10270,7 +10240,7 @@ F: Documentation/devicetree/bindings/arm/hisilicon/low-pin-count.yaml
F: drivers/bus/hisi_lpc.c
HISILICON NETWORK SUBSYSTEM 3 DRIVER (HNS3)
M: Yisen Zhuang <yisen.zhuang@huawei.com>
M: Jian Shen <shenjian15@huawei.com>
M: Salil Mehta <salil.mehta@huawei.com>
M: Jijie Shao <shaojijie@huawei.com>
L: netdev@vger.kernel.org
@ -10279,7 +10249,7 @@ W: http://www.hisilicon.com
F: drivers/net/ethernet/hisilicon/hns3/
HISILICON NETWORK SUBSYSTEM DRIVER
M: Yisen Zhuang <yisen.zhuang@huawei.com>
M: Jian Shen <shenjian15@huawei.com>
M: Salil Mehta <salil.mehta@huawei.com>
L: netdev@vger.kernel.org
S: Maintained
@ -11283,10 +11253,10 @@ F: security/integrity/
F: security/integrity/ima/
INTEGRITY POLICY ENFORCEMENT (IPE)
M: Fan Wu <wufan@linux.microsoft.com>
M: Fan Wu <wufan@kernel.org>
L: linux-security-module@vger.kernel.org
S: Supported
T: git https://github.com/microsoft/ipe.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/wufan/ipe.git
F: Documentation/admin-guide/LSM/ipe.rst
F: Documentation/security/ipe.rst
F: scripts/ipe/
@ -11604,6 +11574,16 @@ F: drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c
F: drivers/crypto/intel/keembay/ocs-hcu.c
F: drivers/crypto/intel/keembay/ocs-hcu.h
INTEL LA JOLLA COVE ADAPTER (LJCA) USB I/O EXPANDER DRIVERS
M: Wentong Wu <wentong.wu@intel.com>
M: Sakari Ailus <sakari.ailus@linux.intel.com>
S: Maintained
F: drivers/gpio/gpio-ljca.c
F: drivers/i2c/busses/i2c-ljca.c
F: drivers/spi/spi-ljca.c
F: drivers/usb/misc/usb-ljca.c
F: include/linux/usb/ljca.h
INTEL MANAGEMENT ENGINE (mei)
M: Tomas Winkler <tomas.winkler@intel.com>
L: linux-kernel@vger.kernel.org
@ -12242,6 +12222,7 @@ R: Dmitry Vyukov <dvyukov@google.com>
R: Vincenzo Frascino <vincenzo.frascino@arm.com>
L: kasan-dev@googlegroups.com
S: Maintained
B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
F: Documentation/dev-tools/kasan.rst
F: arch/*/include/asm/*kasan.h
F: arch/*/mm/kasan_init*
@ -12265,6 +12246,7 @@ R: Dmitry Vyukov <dvyukov@google.com>
R: Andrey Konovalov <andreyknvl@gmail.com>
L: kasan-dev@googlegroups.com
S: Maintained
B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
F: Documentation/dev-tools/kcov.rst
F: include/linux/kcov.h
F: include/uapi/linux/kcov.h
@ -12944,49 +12926,29 @@ LIBATA PATA ARASAN COMPACT FLASH CONTROLLER
M: Viresh Kumar <vireshk@kernel.org>
L: linux-ide@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
F: drivers/ata/pata_arasan_cf.c
F: include/linux/pata_arasan_cf_data.h
LIBATA PATA DRIVERS
R: Sergey Shtylyov <s.shtylyov@omp.ru>
L: linux-ide@vger.kernel.org
F: drivers/ata/ata_*.c
F: drivers/ata/pata_*.c
LIBATA PATA FARADAY FTIDE010 AND GEMINI SATA BRIDGE DRIVERS
M: Linus Walleij <linus.walleij@linaro.org>
L: linux-ide@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
F: drivers/ata/pata_ftide010.c
F: drivers/ata/sata_gemini.c
F: drivers/ata/sata_gemini.h
LIBATA SATA AHCI PLATFORM devices support
M: Hans de Goede <hdegoede@redhat.com>
M: Jens Axboe <axboe@kernel.dk>
L: linux-ide@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
F: drivers/ata/ahci_platform.c
F: drivers/ata/libahci_platform.c
F: include/linux/ahci_platform.h
LIBATA SATA AHCI SYNOPSYS DWC CONTROLLER DRIVER
M: Serge Semin <fancer.lancer@gmail.com>
L: linux-ide@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata.git
F: Documentation/devicetree/bindings/ata/baikal,bt1-ahci.yaml
F: Documentation/devicetree/bindings/ata/snps,dwc-ahci.yaml
F: drivers/ata/ahci_dwc.c
LIBATA SATA PROMISE TX2/TX4 CONTROLLER DRIVER
M: Mikael Pettersson <mikpelinux@gmail.com>
L: linux-ide@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
F: drivers/ata/sata_promise.*
LIBATA SUBSYSTEM (Serial and Parallel ATA drivers)
@ -14178,16 +14140,6 @@ S: Maintained
T: git git://linuxtv.org/media_tree.git
F: drivers/media/platform/nxp/imx-pxp.[ch]
MEDIA DRIVERS FOR ASCOT2E
M: Sergey Kozlov <serjk@netup.ru>
M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
W: http://netup.tv/
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/ascot2e*
MEDIA DRIVERS FOR CXD2099AR CI CONTROLLERS
M: Jasmin Jessich <jasmin@anw.at>
L: linux-media@vger.kernel.org
@ -14196,16 +14148,6 @@ W: https://linuxtv.org
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/cxd2099*
MEDIA DRIVERS FOR CXD2841ER
M: Sergey Kozlov <serjk@netup.ru>
M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
W: http://netup.tv/
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/cxd2841er*
MEDIA DRIVERS FOR CXD2880
M: Yasunari Takiguchi <Yasunari.Takiguchi@sony.com>
L: linux-media@vger.kernel.org
@ -14250,35 +14192,6 @@ F: drivers/media/platform/nxp/imx-mipi-csis.c
F: drivers/media/platform/nxp/imx7-media-csi.c
F: drivers/media/platform/nxp/imx8mq-mipi-csi2.c
MEDIA DRIVERS FOR HELENE
M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
W: http://netup.tv/
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/helene*
MEDIA DRIVERS FOR HORUS3A
M: Sergey Kozlov <serjk@netup.ru>
M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
W: http://netup.tv/
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/horus3a*
MEDIA DRIVERS FOR LNBH25
M: Sergey Kozlov <serjk@netup.ru>
M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
W: http://netup.tv/
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/lnbh25*
MEDIA DRIVERS FOR MXL5XX TUNER DEMODULATORS
L: linux-media@vger.kernel.org
S: Orphan
@ -14286,16 +14199,6 @@ W: https://linuxtv.org
T: git git://linuxtv.org/media_tree.git
F: drivers/media/dvb-frontends/mxl5xx*
MEDIA DRIVERS FOR NETUP PCI UNIVERSAL DVB devices
M: Sergey Kozlov <serjk@netup.ru>
M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
S: Supported
W: https://linuxtv.org
W: http://netup.tv/
T: git git://linuxtv.org/media_tree.git
F: drivers/media/pci/netup_unidvb/*
MEDIA DRIVERS FOR NVIDIA TEGRA - VDE
M: Dmitry Osipenko <digetx@gmail.com>
L: linux-media@vger.kernel.org
@ -14913,9 +14816,10 @@ N: include/linux/page[-_]*
MEMORY MAPPING
M: Andrew Morton <akpm@linux-foundation.org>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Vlastimil Babka <vbabka@suse.cz>
R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Jann Horn <jannh@google.com>
L: linux-mm@kvack.org
S: Maintained
W: http://www.linux-mm.org
@ -14938,13 +14842,6 @@ F: drivers/mtd/
F: include/linux/mtd/
F: include/uapi/mtd/
MEMSENSING MICROSYSTEMS MSA311 DRIVER
M: Dmitry Rokosov <ddrokosov@sberdevices.ru>
L: linux-iio@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/iio/accel/memsensing,msa311.yaml
F: drivers/iio/accel/msa311.c
MEN A21 WATCHDOG DRIVER
M: Johannes Thumshirn <morbidrsa@gmail.com>
L: linux-watchdog@vger.kernel.org
@ -15278,7 +15175,6 @@ F: drivers/tty/serial/8250/8250_pci1xxxx.c
MICROCHIP POLARFIRE FPGA DRIVERS
M: Conor Dooley <conor.dooley@microchip.com>
R: Vladimir Georgiev <v.georgiev@metrotek.ru>
L: linux-fpga@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/fpga/microchip,mpf-spi-fpga-mgr.yaml
@ -15533,17 +15429,6 @@ F: arch/mips/
F: drivers/platform/mips/
F: include/dt-bindings/mips/
MIPS BAIKAL-T1 PLATFORM
M: Serge Semin <fancer.lancer@gmail.com>
L: linux-mips@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/bus/baikal,bt1-*.yaml
F: Documentation/devicetree/bindings/clock/baikal,bt1-*.yaml
F: drivers/bus/bt1-*.c
F: drivers/clk/baikal-t1/
F: drivers/memory/bt1-l2-ctl.c
F: drivers/mtd/maps/physmap-bt1-rom.[ch]
MIPS BOSTON DEVELOPMENT BOARD
M: Paul Burton <paulburton@kernel.org>
L: linux-mips@vger.kernel.org
@ -15556,7 +15441,6 @@ F: include/dt-bindings/clock/boston-clock.h
MIPS CORE DRIVERS
M: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
M: Serge Semin <fancer.lancer@gmail.com>
L: linux-mips@vger.kernel.org
S: Supported
F: drivers/bus/mips_cdmm.c
@ -16092,6 +15976,7 @@ F: include/uapi/linux/net_dropmon.h
F: net/core/drop_monitor.c
NETWORKING DRIVERS
M: Andrew Lunn <andrew+netdev@lunn.ch>
M: "David S. Miller" <davem@davemloft.net>
M: Eric Dumazet <edumazet@google.com>
M: Jakub Kicinski <kuba@kernel.org>
@ -16201,8 +16086,19 @@ F: lib/random32.c
F: net/
F: tools/net/
F: tools/testing/selftests/net/
X: Documentation/networking/mac80211-injection.rst
X: Documentation/networking/mac80211_hwsim/
X: Documentation/networking/regulatory.rst
X: include/net/cfg80211.h
X: include/net/ieee80211_radiotap.h
X: include/net/iw_handler.h
X: include/net/mac80211.h
X: include/net/wext.h
X: net/9p/
X: net/bluetooth/
X: net/mac80211/
X: net/rfkill/
X: net/wireless/
NETWORKING [IPSEC]
M: Steffen Klassert <steffen.klassert@secunet.com>
@ -16512,12 +16408,6 @@ F: include/linux/ntb.h
F: include/linux/ntb_transport.h
F: tools/testing/selftests/ntb/
NTB IDT DRIVER
M: Serge Semin <fancer.lancer@gmail.com>
L: ntb@lists.linux.dev
S: Supported
F: drivers/ntb/hw/idt/
NTB INTEL DRIVER
M: Dave Jiang <dave.jiang@intel.com>
L: ntb@lists.linux.dev
@ -18538,13 +18428,6 @@ F: drivers/pps/
F: include/linux/pps*.h
F: include/uapi/linux/pps.h
PPTP DRIVER
M: Dmitry Kozlov <xeb@mail.ru>
L: netdev@vger.kernel.org
S: Maintained
W: http://sourceforge.net/projects/accel-pptp
F: drivers/net/ppp/pptp.c
PRESSURE STALL INFORMATION (PSI)
M: Johannes Weiner <hannes@cmpxchg.org>
M: Suren Baghdasaryan <surenb@google.com>
@ -19518,6 +19401,14 @@ S: Maintained
F: Documentation/tools/rtla/
F: tools/tracing/rtla/
Real-time Linux (PREEMPT_RT)
M: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
M: Clark Williams <clrkwllms@kernel.org>
M: Steven Rostedt <rostedt@goodmis.org>
L: linux-rt-devel@lists.linux.dev
S: Supported
K: PREEMPT_RT
REALTEK AUDIO CODECS
M: Oder Chiou <oder_chiou@realtek.com>
S: Maintained
@ -19627,15 +19518,6 @@ S: Supported
F: Documentation/devicetree/bindings/i2c/renesas,iic-emev2.yaml
F: drivers/i2c/busses/i2c-emev2.c
RENESAS ETHERNET AVB DRIVER
R: Sergey Shtylyov <s.shtylyov@omp.ru>
L: netdev@vger.kernel.org
L: linux-renesas-soc@vger.kernel.org
F: Documentation/devicetree/bindings/net/renesas,etheravb.yaml
F: drivers/net/ethernet/renesas/Kconfig
F: drivers/net/ethernet/renesas/Makefile
F: drivers/net/ethernet/renesas/ravb*
RENESAS ETHERNET SWITCH DRIVER
R: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
L: netdev@vger.kernel.org
@ -19685,14 +19567,6 @@ F: Documentation/devicetree/bindings/i2c/renesas,rmobile-iic.yaml
F: drivers/i2c/busses/i2c-rcar.c
F: drivers/i2c/busses/i2c-sh_mobile.c
RENESAS R-CAR SATA DRIVER
R: Sergey Shtylyov <s.shtylyov@omp.ru>
L: linux-ide@vger.kernel.org
L: linux-renesas-soc@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/ata/renesas,rcar-sata.yaml
F: drivers/ata/sata_rcar.c
RENESAS R-CAR THERMAL DRIVERS
M: Niklas Söderlund <niklas.soderlund@ragnatech.se>
L: linux-renesas-soc@vger.kernel.org
@ -19768,16 +19642,6 @@ S: Supported
F: Documentation/devicetree/bindings/i2c/renesas,rzv2m.yaml
F: drivers/i2c/busses/i2c-rzv2m.c
RENESAS SUPERH ETHERNET DRIVER
R: Sergey Shtylyov <s.shtylyov@omp.ru>
L: netdev@vger.kernel.org
L: linux-renesas-soc@vger.kernel.org
F: Documentation/devicetree/bindings/net/renesas,ether.yaml
F: drivers/net/ethernet/renesas/Kconfig
F: drivers/net/ethernet/renesas/Makefile
F: drivers/net/ethernet/renesas/sh_eth*
F: include/linux/sh_eth.h
RENESAS USB PHY DRIVER
M: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
L: linux-renesas-soc@vger.kernel.org
@ -21772,8 +21636,8 @@ F: drivers/accessibility/speakup/
SPEAR PLATFORM/CLOCK/PINCTRL SUPPORT
M: Viresh Kumar <vireshk@kernel.org>
M: Shiraz Hashim <shiraz.linux.kernel@gmail.com>
M: soc@kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: soc@lists.linux.dev
S: Maintained
W: http://www.st.com/spear
F: arch/arm/boot/dts/st/spear*
@ -22431,19 +22295,11 @@ F: drivers/tty/serial/8250/8250_lpss.c
SYNOPSYS DESIGNWARE APB GPIO DRIVER
M: Hoan Tran <hoan@os.amperecomputing.com>
M: Serge Semin <fancer.lancer@gmail.com>
L: linux-gpio@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/gpio/snps,dw-apb-gpio.yaml
F: drivers/gpio/gpio-dwapb.c
SYNOPSYS DESIGNWARE APB SSI DRIVER
M: Serge Semin <fancer.lancer@gmail.com>
L: linux-spi@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/spi/snps,dw-apb-ssi.yaml
F: drivers/spi/spi-dw*
SYNOPSYS DESIGNWARE AXI DMAC DRIVER
M: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
S: Maintained
@ -23753,12 +23609,6 @@ L: linux-input@vger.kernel.org
S: Maintained
F: drivers/hid/hid-udraw-ps3.c
UFS FILESYSTEM
M: Evgeniy Dushistov <dushistov@mail.ru>
S: Maintained
F: Documentation/admin-guide/ufs.rst
F: fs/ufs/
UHID USERSPACE HID IO DRIVER
M: David Rheinsberg <david@readahead.eu>
L: linux-input@vger.kernel.org
@ -24061,6 +23911,7 @@ USB RAW GADGET DRIVER
R: Andrey Konovalov <andreyknvl@gmail.com>
L: linux-usb@vger.kernel.org
S: Maintained
B: https://github.com/xairy/raw-gadget/issues
F: Documentation/usb/raw-gadget.rst
F: drivers/usb/gadget/legacy/raw_gadget.c
F: include/uapi/linux/usb/raw_gadget.h
@ -24177,8 +24028,12 @@ F: drivers/usb/host/xhci*
USER DATAGRAM PROTOCOL (UDP)
M: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
F: include/linux/udp.h
F: include/net/udp.h
F: include/trace/events/udp.h
F: include/uapi/linux/udp.h
F: net/ipv4/udp.c
F: net/ipv6/udp.c
@ -24728,9 +24583,10 @@ F: tools/testing/vsock/
VMA
M: Andrew Morton <akpm@linux-foundation.org>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Vlastimil Babka <vbabka@suse.cz>
R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Jann Horn <jannh@google.com>
L: linux-mm@kvack.org
S: Maintained
W: https://www.linux-mm.org
@ -25404,7 +25260,7 @@ F: include/xen/arm/swiotlb-xen.h
F: include/xen/swiotlb-xen.h
XFS FILESYSTEM
M: Chandan Babu R <chandan.babu@oracle.com>
M: Carlos Maiolino <cem@kernel.org>
R: Darrick J. Wong <djwong@kernel.org>
L: linux-xfs@vger.kernel.org
S: Supported


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 12
SUBLEVEL = 0
EXTRAVERSION = -rc2
EXTRAVERSION = -rc4
NAME = Baby Opossum Posse
# *DOCUMENTATION*


@ -838,7 +838,7 @@ config CFI_CLANG
config CFI_ICALL_NORMALIZE_INTEGERS
bool "Normalize CFI tags for integers"
depends on CFI_CLANG
depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS
depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS_CLANG
help
This option normalizes the CFI tags for integer types so that all
integer types of the same size and signedness receive the same CFI
@ -851,21 +851,19 @@ config CFI_ICALL_NORMALIZE_INTEGERS
This option is necessary for using CFI with Rust. If unsure, say N.
config HAVE_CFI_ICALL_NORMALIZE_INTEGERS
def_bool !GCOV_KERNEL && !KASAN
depends on CFI_CLANG
config HAVE_CFI_ICALL_NORMALIZE_INTEGERS_CLANG
def_bool y
depends on $(cc-option,-fsanitize=kcfi -fsanitize-cfi-icall-experimental-normalize-integers)
help
Is CFI_ICALL_NORMALIZE_INTEGERS supported with the set of compilers
currently in use?
# With GCOV/KASAN we need this fix: https://github.com/llvm/llvm-project/pull/104826
depends on CLANG_VERSION >= 190000 || (!GCOV_KERNEL && !KASAN_GENERIC && !KASAN_SW_TAGS)
This option defaults to false if GCOV or KASAN is enabled, as there is
an LLVM bug that makes normalized integers tags incompatible with
KASAN and GCOV. Kconfig currently does not have the infrastructure to
detect whether your rustc compiler contains the fix for this bug, so
it is assumed that it doesn't. If your compiler has the fix, you can
explicitly enable this option in your config file. The Kconfig logic
needed to detect this will be added in a future kernel release.
config HAVE_CFI_ICALL_NORMALIZE_INTEGERS_RUSTC
def_bool y
depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS_CLANG
depends on RUSTC_VERSION >= 107900
# With GCOV/KASAN we need this fix: https://github.com/rust-lang/rust/pull/129373
depends on (RUSTC_LLVM_VERSION >= 190000 && RUSTC_VERSION >= 108200) || \
(!GCOV_KERNEL && !KASAN_GENERIC && !KASAN_SW_TAGS)
config CFI_PERMISSIVE
bool "Use CFI in permissive mode"


@ -77,7 +77,7 @@
};
&hdmi {
hpd-gpios = <&expgpio 1 GPIO_ACTIVE_LOW>;
hpd-gpios = <&expgpio 0 GPIO_ACTIVE_LOW>;
power-domains = <&power RPI_POWER_DOMAIN_HDMI>;
status = "okay";
};


@ -136,7 +136,7 @@
};
cp0_mdio_pins: cp0-mdio-pins {
marvell,pins = "mpp40", "mpp41";
marvell,pins = "mpp0", "mpp1";
marvell,function = "ge";
};


@ -10,11 +10,9 @@
#include <asm/insn.h>
#include <asm/probes.h>
#define MAX_UINSN_BYTES AARCH64_INSN_SIZE
#define UPROBE_SWBP_INSN cpu_to_le32(BRK64_OPCODE_UPROBES)
#define UPROBE_SWBP_INSN_SIZE AARCH64_INSN_SIZE
#define UPROBE_XOL_SLOT_BYTES MAX_UINSN_BYTES
#define UPROBE_XOL_SLOT_BYTES AARCH64_INSN_SIZE
typedef __le32 uprobe_opcode_t;
@ -23,8 +21,8 @@ struct arch_uprobe_task {
struct arch_uprobe {
union {
u8 insn[MAX_UINSN_BYTES];
u8 ixol[MAX_UINSN_BYTES];
__le32 insn;
__le32 ixol;
};
struct arch_probe_insn api;
bool simulate;


@ -99,10 +99,6 @@ arm_probe_decode_insn(probe_opcode_t insn, struct arch_probe_insn *api)
aarch64_insn_is_blr(insn) ||
aarch64_insn_is_ret(insn)) {
api->handler = simulate_br_blr_ret;
} else if (aarch64_insn_is_ldr_lit(insn)) {
api->handler = simulate_ldr_literal;
} else if (aarch64_insn_is_ldrsw_lit(insn)) {
api->handler = simulate_ldrsw_literal;
} else {
/*
* Instruction cannot be stepped out-of-line and we don't
@ -140,6 +136,17 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
probe_opcode_t insn = le32_to_cpu(*addr);
probe_opcode_t *scan_end = NULL;
unsigned long size = 0, offset = 0;
struct arch_probe_insn *api = &asi->api;
if (aarch64_insn_is_ldr_lit(insn)) {
api->handler = simulate_ldr_literal;
decoded = INSN_GOOD_NO_SLOT;
} else if (aarch64_insn_is_ldrsw_lit(insn)) {
api->handler = simulate_ldrsw_literal;
decoded = INSN_GOOD_NO_SLOT;
} else {
decoded = arm_probe_decode_insn(insn, &asi->api);
}
/*
* If there's a symbol defined in front of and near enough to
@ -157,7 +164,6 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
else
scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;
}
decoded = arm_probe_decode_insn(insn, &asi->api);
if (decoded != INSN_REJECTED && scan_end)
if (is_probed_address_atomic(addr - 1, scan_end))


@ -171,17 +171,15 @@ simulate_tbz_tbnz(u32 opcode, long addr, struct pt_regs *regs)
void __kprobes
simulate_ldr_literal(u32 opcode, long addr, struct pt_regs *regs)
{
u64 *load_addr;
unsigned long load_addr;
int xn = opcode & 0x1f;
int disp;
disp = ldr_displacement(opcode);
load_addr = (u64 *) (addr + disp);
load_addr = addr + ldr_displacement(opcode);
if (opcode & (1 << 30)) /* x0-x30 */
set_x_reg(regs, xn, *load_addr);
set_x_reg(regs, xn, READ_ONCE(*(u64 *)load_addr));
else /* w0-w30 */
set_w_reg(regs, xn, *load_addr);
set_w_reg(regs, xn, READ_ONCE(*(u32 *)load_addr));
instruction_pointer_set(regs, instruction_pointer(regs) + 4);
}
@ -189,14 +187,12 @@ simulate_ldr_literal(u32 opcode, long addr, struct pt_regs *regs)
void __kprobes
simulate_ldrsw_literal(u32 opcode, long addr, struct pt_regs *regs)
{
s32 *load_addr;
unsigned long load_addr;
int xn = opcode & 0x1f;
int disp;
disp = ldr_displacement(opcode);
load_addr = (s32 *) (addr + disp);
load_addr = addr + ldr_displacement(opcode);
set_x_reg(regs, xn, *load_addr);
set_x_reg(regs, xn, READ_ONCE(*(s32 *)load_addr));
instruction_pointer_set(regs, instruction_pointer(regs) + 4);
}


@ -42,7 +42,7 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
else if (!IS_ALIGNED(addr, AARCH64_INSN_SIZE))
return -EINVAL;
insn = *(probe_opcode_t *)(&auprobe->insn[0]);
insn = le32_to_cpu(auprobe->insn);
switch (arm_probe_decode_insn(insn, &auprobe->api)) {
case INSN_REJECTED:
@ -108,7 +108,7 @@ bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
if (!auprobe->simulate)
return false;
insn = *(probe_opcode_t *)(&auprobe->insn[0]);
insn = le32_to_cpu(auprobe->insn);
addr = instruction_pointer(regs);
if (auprobe->api.handler)


@ -412,6 +412,9 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
p->thread.cpu_context.x19 = (unsigned long)args->fn;
p->thread.cpu_context.x20 = (unsigned long)args->fn_arg;
if (system_supports_poe())
p->thread.por_el0 = POR_EL0_INIT;
}
p->thread.cpu_context.pc = (unsigned long)ret_from_fork;
p->thread.cpu_context.sp = (unsigned long)childregs;


@ -494,6 +494,7 @@ FixupDAR:/* Entry point for dcbx workaround. */
bctr /* jump into table */
152:
mfdar r11
mtdar r10
mtctr r11 /* restore ctr reg from DAR */
mfspr r11, SPRN_SPRG_THREAD
stw r10, DAR(r11)


@ -282,6 +282,7 @@ int __init opal_event_init(void)
name, NULL);
if (rc) {
pr_warn("Error %d requesting OPAL irq %d\n", rc, (int)r->start);
kfree(name);
continue;
}
}


@ -18,6 +18,7 @@
#define RV_MAX_REG_ARGS 8
#define RV_FENTRY_NINSNS 2
#define RV_FENTRY_NBYTES (RV_FENTRY_NINSNS * 4)
#define RV_KCFI_NINSNS (IS_ENABLED(CONFIG_CFI_CLANG) ? 1 : 0)
/* imm that allows emit_imm to emit max count insns */
#define RV_MAX_COUNT_IMM 0x7FFF7FF7FF7FF7FF
@ -271,7 +272,8 @@ static void __build_epilogue(bool is_tail_call, struct rv_jit_context *ctx)
if (!is_tail_call)
emit_addiw(RV_REG_A0, RV_REG_A5, 0, ctx);
emit_jalr(RV_REG_ZERO, is_tail_call ? RV_REG_T3 : RV_REG_RA,
is_tail_call ? (RV_FENTRY_NINSNS + 1) * 4 : 0, /* skip reserved nops and TCC init */
/* kcfi, fentry and TCC init insns will be skipped on tailcall */
is_tail_call ? (RV_KCFI_NINSNS + RV_FENTRY_NINSNS + 1) * 4 : 0,
ctx);
}
@ -548,8 +550,8 @@ static void emit_atomic(u8 rd, u8 rs, s16 off, s32 imm, bool is64,
rv_lr_w(r0, 0, rd, 0, 0), ctx);
jmp_offset = ninsns_rvoff(8);
emit(rv_bne(RV_REG_T2, r0, jmp_offset >> 1), ctx);
emit(is64 ? rv_sc_d(RV_REG_T3, rs, rd, 0, 0) :
rv_sc_w(RV_REG_T3, rs, rd, 0, 0), ctx);
emit(is64 ? rv_sc_d(RV_REG_T3, rs, rd, 0, 1) :
rv_sc_w(RV_REG_T3, rs, rd, 0, 1), ctx);
jmp_offset = ninsns_rvoff(-6);
emit(rv_bne(RV_REG_T3, 0, jmp_offset >> 1), ctx);
emit(rv_fence(0x3, 0x3), ctx);


@ -50,7 +50,6 @@ CONFIG_NUMA=y
CONFIG_HZ_100=y
CONFIG_CERT_STORE=y
CONFIG_EXPOLINE=y
# CONFIG_EXPOLINE_EXTERN is not set
CONFIG_EXPOLINE_AUTO=y
CONFIG_CHSC_SCH=y
CONFIG_VFIO_CCW=m
@ -95,6 +94,7 @@ CONFIG_BINFMT_MISC=m
CONFIG_ZSWAP=y
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
CONFIG_ZSMALLOC_STAT=y
CONFIG_SLAB_BUCKETS=y
CONFIG_SLUB_STATS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_MEMORY_HOTPLUG=y
@ -426,6 +426,13 @@ CONFIG_DEVTMPFS_SAFE=y
# CONFIG_FW_LOADER is not set
CONFIG_CONNECTOR=y
CONFIG_ZRAM=y
CONFIG_ZRAM_BACKEND_LZ4=y
CONFIG_ZRAM_BACKEND_LZ4HC=y
CONFIG_ZRAM_BACKEND_ZSTD=y
CONFIG_ZRAM_BACKEND_DEFLATE=y
CONFIG_ZRAM_BACKEND_842=y
CONFIG_ZRAM_BACKEND_LZO=y
CONFIG_ZRAM_DEF_COMP_DEFLATE=y
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
@ -486,6 +493,7 @@ CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=m
CONFIG_DM_VERITY=m
CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y
CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG_PLATFORM_KEYRING=y
CONFIG_DM_SWITCH=m
CONFIG_DM_INTEGRITY=m
CONFIG_DM_VDO=m
@ -535,6 +543,7 @@ CONFIG_NLMON=m
CONFIG_MLX4_EN=m
CONFIG_MLX5_CORE=m
CONFIG_MLX5_CORE_EN=y
# CONFIG_NET_VENDOR_META is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_MICROCHIP is not set
# CONFIG_NET_VENDOR_MICROSEMI is not set
@ -695,6 +704,7 @@ CONFIG_NFSD=m
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_NFSD_V4_SECURITY_LABEL=y
# CONFIG_NFSD_LEGACY_CLIENT_TRACKING is not set
CONFIG_CIFS=m
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
@ -740,7 +750,6 @@ CONFIG_CRYPTO_DH=m
CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_ECDSA=m
CONFIG_CRYPTO_ECRDSA=m
CONFIG_CRYPTO_SM2=m
CONFIG_CRYPTO_CURVE25519=m
CONFIG_CRYPTO_AES_TI=m
CONFIG_CRYPTO_ANUBIS=m


@ -48,7 +48,6 @@ CONFIG_NUMA=y
CONFIG_HZ_100=y
CONFIG_CERT_STORE=y
CONFIG_EXPOLINE=y
# CONFIG_EXPOLINE_EXTERN is not set
CONFIG_EXPOLINE_AUTO=y
CONFIG_CHSC_SCH=y
CONFIG_VFIO_CCW=m
@ -89,6 +88,7 @@ CONFIG_BINFMT_MISC=m
CONFIG_ZSWAP=y
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
CONFIG_ZSMALLOC_STAT=y
CONFIG_SLAB_BUCKETS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTREMOVE=y
@ -416,6 +416,13 @@ CONFIG_DEVTMPFS_SAFE=y
# CONFIG_FW_LOADER is not set
CONFIG_CONNECTOR=y
CONFIG_ZRAM=y
CONFIG_ZRAM_BACKEND_LZ4=y
CONFIG_ZRAM_BACKEND_LZ4HC=y
CONFIG_ZRAM_BACKEND_ZSTD=y
CONFIG_ZRAM_BACKEND_DEFLATE=y
CONFIG_ZRAM_BACKEND_842=y
CONFIG_ZRAM_BACKEND_LZO=y
CONFIG_ZRAM_DEF_COMP_DEFLATE=y
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
@ -476,6 +483,7 @@ CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=m
CONFIG_DM_VERITY=m
CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y
CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG_PLATFORM_KEYRING=y
CONFIG_DM_SWITCH=m
CONFIG_DM_INTEGRITY=m
CONFIG_DM_VDO=m
@ -525,6 +533,7 @@ CONFIG_NLMON=m
CONFIG_MLX4_EN=m
CONFIG_MLX5_CORE=m
CONFIG_MLX5_CORE_EN=y
# CONFIG_NET_VENDOR_META is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_MICROCHIP is not set
# CONFIG_NET_VENDOR_MICROSEMI is not set
@ -682,6 +691,7 @@ CONFIG_NFSD=m
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_NFSD_V4_SECURITY_LABEL=y
# CONFIG_NFSD_LEGACY_CLIENT_TRACKING is not set
CONFIG_CIFS=m
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
@ -726,7 +736,6 @@ CONFIG_CRYPTO_DH=m
CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_ECDSA=m
CONFIG_CRYPTO_ECRDSA=m
CONFIG_CRYPTO_SM2=m
CONFIG_CRYPTO_CURVE25519=m
CONFIG_CRYPTO_AES_TI=m
CONFIG_CRYPTO_ANUBIS=m
@ -767,6 +776,7 @@ CONFIG_CRYPTO_LZ4=m
CONFIG_CRYPTO_LZ4HC=m
CONFIG_CRYPTO_ZSTD=m
CONFIG_CRYPTO_ANSI_CPRNG=m
CONFIG_CRYPTO_JITTERENTROPY_OSR=1
CONFIG_CRYPTO_USER_API_HASH=m
CONFIG_CRYPTO_USER_API_SKCIPHER=m
CONFIG_CRYPTO_USER_API_RNG=m


@ -49,6 +49,7 @@ CONFIG_ZFCP=y
# CONFIG_HVC_IUCV is not set
# CONFIG_HW_RANDOM_S390 is not set
# CONFIG_HMC_DRV is not set
# CONFIG_S390_UV_UAPI is not set
# CONFIG_S390_TAPE is not set
# CONFIG_VMCP is not set
# CONFIG_MONWRITER is not set


@ -16,8 +16,10 @@
#include <asm/pci_io.h>
#define xlate_dev_mem_ptr xlate_dev_mem_ptr
#define kc_xlate_dev_mem_ptr xlate_dev_mem_ptr
void *xlate_dev_mem_ptr(phys_addr_t phys);
#define unxlate_dev_mem_ptr unxlate_dev_mem_ptr
#define kc_unxlate_dev_mem_ptr unxlate_dev_mem_ptr
void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr);
#define IO_SPACE_LIMIT 0


@ -49,6 +49,7 @@ struct perf_sf_sde_regs {
};
#define perf_arch_fetch_caller_regs(regs, __ip) do { \
(regs)->psw.mask = 0; \
(regs)->psw.addr = (__ip); \
(regs)->gprs[15] = (unsigned long)__builtin_frame_address(0) - \
offsetof(struct stack_frame, back_chain); \


@ -77,7 +77,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu)
vcpu->stat.instruction_diagnose_258++;
if (vcpu->run->s.regs.gprs[rx] & 7)
return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
rc = read_guest(vcpu, vcpu->run->s.regs.gprs[rx], rx, &parm, sizeof(parm));
rc = read_guest_real(vcpu, vcpu->run->s.regs.gprs[rx], &parm, sizeof(parm));
if (rc)
return kvm_s390_inject_prog_cond(vcpu, rc);
if (parm.parm_version != 2 || parm.parm_len < 5 || parm.code != 0x258)


@ -828,6 +828,8 @@ static int access_guest_page(struct kvm *kvm, enum gacc_mode mode, gpa_t gpa,
const gfn_t gfn = gpa_to_gfn(gpa);
int rc;
if (!gfn_to_memslot(kvm, gfn))
return PGM_ADDRESSING;
if (mode == GACC_STORE)
rc = kvm_write_guest_page(kvm, gfn, data, offset, len);
else
@ -985,6 +987,8 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
gra += fragment_len;
data += fragment_len;
}
if (rc > 0)
vcpu->arch.pgm.code = rc;
return rc;
}


@ -405,11 +405,12 @@ int read_guest_abs(struct kvm_vcpu *vcpu, unsigned long gpa, void *data,
* @len: number of bytes to copy
*
* Copy @len bytes from @data (kernel space) to @gra (guest real address).
* It is up to the caller to ensure that the entire guest memory range is
* valid memory before calling this function.
* Guest low address and key protection are not checked.
*
* Returns zero on success or -EFAULT on error.
* Returns zero on success, -EFAULT when copying from @data failed, or
* PGM_ADDRESSING in case @gra is outside a memslot. In this case, pgm check info
* is also stored to allow injecting into the guest (if applicable) using
* kvm_s390_inject_prog_cond().
*
* If an error occurs data may have been copied partially to guest memory.
*/
@ -428,11 +429,12 @@ int write_guest_real(struct kvm_vcpu *vcpu, unsigned long gra, void *data,
* @len: number of bytes to copy
*
* Copy @len bytes from @gra (guest real address) to @data (kernel space).
* It is up to the caller to ensure that the entire guest memory range is
* valid memory before calling this function.
* Guest key protection is not checked.
*
* Returns zero on success or -EFAULT on error.
* Returns zero on success, -EFAULT when copying to @data failed, or
* PGM_ADDRESSING in case @gra is outside a memslot. In this case, pgm check info
* is also stored to allow injecting into the guest (if applicable) using
* kvm_s390_inject_prog_cond().
*
* If an error occurs data may have been copied partially to kernel space.
*/
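
The caller-side pattern implied by the new return contract, as a minimal sketch: read_guest_real() and kvm_s390_inject_prog_cond() are the real s390 KVM helpers, while the surrounding handler and its names are hypothetical.

static int demo_read_parm(struct kvm_vcpu *vcpu, unsigned long gra)
{
	u32 parm;
	int rc;

	rc = read_guest_real(vcpu, gra, &parm, sizeof(parm));
	if (rc)
		/*
		 * rc is -EFAULT or PGM_ADDRESSING; for the latter the pgm
		 * check info is already stored, so the condition can be
		 * forwarded straight to the guest.
		 */
		return kvm_s390_inject_prog_cond(vcpu, rc);
	return 0;
}

This mirrors the diag 0x258 change shown earlier in this series.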


@ -280,18 +280,19 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf)
goto no_pdev;
switch (ccdf->pec) {
case 0x003a: /* Service Action or Error Recovery Successful */
case 0x002a: /* Error event concerns FMB */
case 0x002b:
case 0x002c:
break;
case 0x0040: /* Service Action or Error Recovery Failed */
case 0x003b:
zpci_event_io_failure(pdev, pci_channel_io_perm_failure);
break;
default: /* PCI function left in the error state attempt to recover */
ers_res = zpci_event_attempt_error_recovery(pdev);
if (ers_res != PCI_ERS_RESULT_RECOVERED)
zpci_event_io_failure(pdev, pci_channel_io_perm_failure);
break;
default:
/*
* Mark as frozen, not permanently failed, because the device
* could subsequently be recovered by the platform.
*/
zpci_event_io_failure(pdev, pci_channel_io_frozen);
break;
}
pci_dev_put(pdev);
no_pdev:


@ -9,6 +9,8 @@
#include <asm/unwind_hints.h>
#include <asm/segment.h>
#include <asm/cache.h>
#include <asm/cpufeatures.h>
#include <asm/nospec-branch.h>
#include "calling.h"
@ -19,6 +21,9 @@ SYM_FUNC_START(entry_ibpb)
movl $PRED_CMD_IBPB, %eax
xorl %edx, %edx
wrmsr
/* Make sure IBPB clears return stack predictions too. */
FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_BUG_IBPB_NO_RET
RET
SYM_FUNC_END(entry_ibpb)
/* For KVM */


@ -871,6 +871,8 @@ SYM_FUNC_START(entry_SYSENTER_32)
/* Now ready to switch the cr3 */
SWITCH_TO_USER_CR3 scratch_reg=%eax
/* Clobbers ZF */
CLEAR_CPU_BUFFERS
/*
* Restore all flags except IF. (We restore IF separately because
@ -881,7 +883,6 @@ SYM_FUNC_START(entry_SYSENTER_32)
BUG_IF_WRONG_CR3 no_user_check=1
popfl
popl %eax
CLEAR_CPU_BUFFERS
/*
* Return back to the vDSO, which will pop ecx and edx.
@ -1144,7 +1145,6 @@ SYM_CODE_START(asm_exc_nmi)
/* Not on SYSENTER stack. */
call exc_nmi
CLEAR_CPU_BUFFERS
jmp .Lnmi_return
.Lnmi_from_sysenter_stack:
@ -1165,6 +1165,7 @@ SYM_CODE_START(asm_exc_nmi)
CHECK_AND_APPLY_ESPFIX
RESTORE_ALL_NMI cr3_reg=%edi pop=4
CLEAR_CPU_BUFFERS
jmp .Lirq_return
#ifdef CONFIG_X86_ESPFIX32
@ -1206,6 +1207,7 @@ SYM_CODE_START(asm_exc_nmi)
* 1 - orig_ax
*/
lss (1+5+6)*4(%esp), %esp # back to espfix stack
CLEAR_CPU_BUFFERS
jmp .Lirq_return
#endif
SYM_CODE_END(asm_exc_nmi)


@ -215,7 +215,7 @@
#define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* Disable Speculative Store Bypass. */
#define X86_FEATURE_LS_CFG_SSBD ( 7*32+24) /* AMD SSBD implementation via LS_CFG MSR */
#define X86_FEATURE_IBRS ( 7*32+25) /* "ibrs" Indirect Branch Restricted Speculation */
#define X86_FEATURE_IBPB ( 7*32+26) /* "ibpb" Indirect Branch Prediction Barrier */
#define X86_FEATURE_IBPB ( 7*32+26) /* "ibpb" Indirect Branch Prediction Barrier without a guaranteed RSB flush */
#define X86_FEATURE_STIBP ( 7*32+27) /* "stibp" Single Thread Indirect Branch Predictors */
#define X86_FEATURE_ZEN ( 7*32+28) /* Generic flag for all Zen and newer */
#define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* L1TF workaround PTE inversion */
@ -348,6 +348,7 @@
#define X86_FEATURE_CPPC (13*32+27) /* "cppc" Collaborative Processor Performance Control */
#define X86_FEATURE_AMD_PSFD (13*32+28) /* Predictive Store Forwarding Disable */
#define X86_FEATURE_BTC_NO (13*32+29) /* Not vulnerable to Branch Type Confusion */
#define X86_FEATURE_AMD_IBPB_RET (13*32+30) /* IBPB clears return address predictor */
#define X86_FEATURE_BRS (13*32+31) /* "brs" Branch Sampling available */
/* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
@ -523,4 +524,5 @@
#define X86_BUG_DIV0 X86_BUG(1*32 + 1) /* "div0" AMD DIV0 speculation bug */
#define X86_BUG_RFDS X86_BUG(1*32 + 2) /* "rfds" CPU is vulnerable to Register File Data Sampling */
#define X86_BUG_BHI X86_BUG(1*32 + 3) /* "bhi" CPU is affected by Branch History Injection */
#define X86_BUG_IBPB_NO_RET X86_BUG(1*32 + 4) /* "ibpb_no_ret" IBPB omits return target predictions */
#endif /* _ASM_X86_CPUFEATURES_H */


@ -323,7 +323,16 @@
* Note: Only the memory operand variant of VERW clears the CPU buffers.
*/
.macro CLEAR_CPU_BUFFERS
ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
#ifdef CONFIG_X86_64
ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
#else
/*
* In 32-bit mode, the memory operand must be a %cs reference. The data
* segments may not be usable (vm86 mode), and the stack segment may not
* be flat (ESPFIX32).
*/
ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
#endif
.endm
#ifdef CONFIG_X86_64


@ -44,6 +44,7 @@
#define PCI_DEVICE_ID_AMD_19H_M70H_DF_F4 0x14f4
#define PCI_DEVICE_ID_AMD_19H_M78H_DF_F4 0x12fc
#define PCI_DEVICE_ID_AMD_1AH_M00H_DF_F4 0x12c4
#define PCI_DEVICE_ID_AMD_1AH_M20H_DF_F4 0x16fc
#define PCI_DEVICE_ID_AMD_1AH_M60H_DF_F4 0x124c
#define PCI_DEVICE_ID_AMD_1AH_M70H_DF_F4 0x12bc
#define PCI_DEVICE_ID_AMD_MI200_DF_F4 0x14d4
@ -127,6 +128,7 @@ static const struct pci_device_id amd_nb_link_ids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M78H_DF_F4) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F4) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M00H_DF_F4) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M20H_DF_F4) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M60H_DF_F4) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M70H_DF_F4) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_MI200_DF_F4) },


@ -440,7 +440,19 @@ static int lapic_timer_shutdown(struct clock_event_device *evt)
v = apic_read(APIC_LVTT);
v |= (APIC_LVT_MASKED | LOCAL_TIMER_VECTOR);
apic_write(APIC_LVTT, v);
apic_write(APIC_TMICT, 0);
/*
* Setting APIC_LVT_MASKED (above) should be enough to tell
* the hardware that this timer will never fire. But AMD
* erratum 411 and some Intel CPU behavior circa 2024 say
* otherwise. Time for belt and suspenders programming: mask
* the timer _and_ zero the counter registers:
*/
if (v & APIC_LVT_TIMER_TSCDEADLINE)
wrmsrl(MSR_IA32_TSC_DEADLINE, 0);
else
apic_write(APIC_TMICT, 0);
return 0;
}


@ -1202,5 +1202,6 @@ void amd_check_microcode(void)
if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
return;
on_each_cpu(zenbleed_check_cpu, NULL, 1);
if (cpu_feature_enabled(X86_FEATURE_ZEN2))
on_each_cpu(zenbleed_check_cpu, NULL, 1);
}


@ -1115,8 +1115,25 @@ do_cmd_auto:
case RETBLEED_MITIGATION_IBPB:
setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
/*
* IBPB on entry already obviates the need for
* software-based untraining so clear those in case some
* other mitigation like SRSO has selected them.
*/
setup_clear_cpu_cap(X86_FEATURE_UNRET);
setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
mitigate_smt = true;
/*
* There is no need for RSB filling: entry_ibpb() ensures
* all predictions, including the RSB, are invalidated,
* regardless of IBPB implementation.
*/
setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
break;
case RETBLEED_MITIGATION_STUFF:
@ -2627,6 +2644,14 @@ static void __init srso_select_mitigation(void)
if (has_microcode) {
setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
srso_mitigation = SRSO_MITIGATION_IBPB;
/*
* IBPB on entry already obviates the need for
* software-based untraining so clear those in case some
* other mitigation like Retbleed has selected them.
*/
setup_clear_cpu_cap(X86_FEATURE_UNRET);
setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
}
} else {
pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
@ -2638,6 +2663,13 @@ static void __init srso_select_mitigation(void)
if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
/*
* There is no need for RSB filling: entry_ibpb() ensures
* all predictions, including the RSB, are invalidated,
* regardless of IBPB implementation.
*/
setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
}
} else {
pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");


@ -1443,6 +1443,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
boot_cpu_has(X86_FEATURE_HYPERVISOR)))
setup_force_cpu_bug(X86_BUG_BHI);
if (cpu_has(c, X86_FEATURE_AMD_IBPB) && !cpu_has(c, X86_FEATURE_AMD_IBPB_RET))
setup_force_cpu_bug(X86_BUG_IBPB_NO_RET);
if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
return;


@ -207,7 +207,7 @@ static inline bool rdt_get_mb_table(struct rdt_resource *r)
return false;
}
static bool __get_mem_config_intel(struct rdt_resource *r)
static __init bool __get_mem_config_intel(struct rdt_resource *r)
{
struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
union cpuid_0x10_3_eax eax;
@ -241,7 +241,7 @@ static bool __get_mem_config_intel(struct rdt_resource *r)
return true;
}
static bool __rdt_get_mem_config_amd(struct rdt_resource *r)
static __init bool __rdt_get_mem_config_amd(struct rdt_resource *r)
{
struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
u32 eax, ebx, ecx, edx, subleaf;


@ -29,10 +29,10 @@
* hardware. The allocated bandwidth percentage is rounded to the next
* control step available on the hardware.
*/
static bool bw_validate(char *buf, unsigned long *data, struct rdt_resource *r)
static bool bw_validate(char *buf, u32 *data, struct rdt_resource *r)
{
unsigned long bw;
int ret;
u32 bw;
/*
* Only linear delay values are supported for current Intel SKUs.
@ -42,16 +42,21 @@ static bool bw_validate(char *buf, unsigned long *data, struct rdt_resource *r)
return false;
}
ret = kstrtoul(buf, 10, &bw);
ret = kstrtou32(buf, 10, &bw);
if (ret) {
rdt_last_cmd_printf("Non-decimal digit in MB value %s\n", buf);
rdt_last_cmd_printf("Invalid MB value %s\n", buf);
return false;
}
if ((bw < r->membw.min_bw || bw > r->default_ctrl) &&
!is_mba_sc(r)) {
rdt_last_cmd_printf("MB value %ld out of range [%d,%d]\n", bw,
r->membw.min_bw, r->default_ctrl);
/* Nothing else to do if software controller is enabled. */
if (is_mba_sc(r)) {
*data = bw;
return true;
}
if (bw < r->membw.min_bw || bw > r->default_ctrl) {
rdt_last_cmd_printf("MB value %u out of range [%d,%d]\n",
bw, r->membw.min_bw, r->default_ctrl);
return false;
}
@ -65,7 +70,7 @@ int parse_bw(struct rdt_parse_data *data, struct resctrl_schema *s,
struct resctrl_staged_config *cfg;
u32 closid = data->rdtgrp->closid;
struct rdt_resource *r = s->res;
unsigned long bw_val;
u32 bw_val;
cfg = &d->staged_config[s->conf_type];
if (cfg->have_new_ctrl) {


@ -1032,6 +1032,10 @@ static u64 xen_do_read_msr(unsigned int msr, int *err)
switch (msr) {
case MSR_IA32_APICBASE:
val &= ~X2APIC_ENABLE;
if (smp_processor_id() == 0)
val |= MSR_IA32_APICBASE_BSP;
else
val &= ~MSR_IA32_APICBASE_BSP;
break;
}
return val;


@ -4310,6 +4310,12 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
/* mark the queue as mq asap */
q->mq_ops = set->ops;
/*
* ->tag_set has to be set up before initializing hctx, since the cpuhp
* handler needs it for checking the queue mapping
*/
q->tag_set = set;
if (blk_mq_alloc_ctxs(q))
goto err_exit;
@ -4328,8 +4334,6 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
INIT_WORK(&q->timeout_work, blk_mq_timeout_work);
blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ);
q->tag_set = set;
q->queue_flags |= QUEUE_FLAG_MQ_DEFAULT;
INIT_DELAYED_WORK(&q->requeue_work, blk_mq_requeue_work);


@ -219,8 +219,8 @@ static int rq_qos_wake_function(struct wait_queue_entry *curr,
data->got_token = true;
smp_wmb();
list_del_init(&curr->entry);
wake_up_process(data->task);
list_del_init_careful(&curr->entry);
return 1;
}


@ -106,8 +106,7 @@ static struct elevator_type *__elevator_find(const char *name)
return NULL;
}
static struct elevator_type *elevator_find_get(struct request_queue *q,
const char *name)
static struct elevator_type *elevator_find_get(const char *name)
{
struct elevator_type *e;
@ -551,7 +550,7 @@ EXPORT_SYMBOL_GPL(elv_unregister);
static inline bool elv_support_iosched(struct request_queue *q)
{
if (!queue_is_mq(q) ||
(q->tag_set && (q->tag_set->flags & BLK_MQ_F_NO_SCHED)))
(q->tag_set->flags & BLK_MQ_F_NO_SCHED))
return false;
return true;
}
@ -562,14 +561,14 @@ static inline bool elv_support_iosched(struct request_queue *q)
*/
static struct elevator_type *elevator_get_default(struct request_queue *q)
{
if (q->tag_set && q->tag_set->flags & BLK_MQ_F_NO_SCHED_BY_DEFAULT)
if (q->tag_set->flags & BLK_MQ_F_NO_SCHED_BY_DEFAULT)
return NULL;
if (q->nr_hw_queues != 1 &&
!blk_mq_is_shared_tags(q->tag_set->flags))
return NULL;
return elevator_find_get(q, "mq-deadline");
return elevator_find_get("mq-deadline");
}
/*
@ -697,7 +696,7 @@ static int elevator_change(struct request_queue *q, const char *elevator_name)
if (q->elevator && elevator_match(q->elevator->type, elevator_name))
return 0;
e = elevator_find_get(q, elevator_name);
e = elevator_find_get(elevator_name);
if (!e)
return -EINVAL;
ret = elevator_switch(q, e);
@ -709,13 +708,21 @@ int elv_iosched_load_module(struct gendisk *disk, const char *buf,
size_t count)
{
char elevator_name[ELV_NAME_MAX];
struct elevator_type *found;
const char *name;
if (!elv_support_iosched(disk->queue))
return -EOPNOTSUPP;
strscpy(elevator_name, buf, sizeof(elevator_name));
name = strstrip(elevator_name);
request_module("%s-iosched", strstrip(elevator_name));
spin_lock(&elv_list_lock);
found = __elevator_find(name);
spin_unlock(&elv_list_lock);
if (!found)
request_module("%s-iosched", name);
return 0;
}


@ -373,7 +373,7 @@ found:
q->cra_flags |= CRYPTO_ALG_DEAD;
alg = test->adult;
if (list_empty(&alg->cra_list))
if (crypto_is_dead(alg))
goto complete;
if (err == -ECANCELED)


@ -1940,7 +1940,7 @@ static int __alg_test_hash(const struct hash_testvec *vecs,
atfm = crypto_alloc_ahash(driver, type, mask);
if (IS_ERR(atfm)) {
if (PTR_ERR(atfm) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: hash: failed to allocate transform for %s: %ld\n",
driver, PTR_ERR(atfm));
return PTR_ERR(atfm);
@ -2706,7 +2706,7 @@ static int alg_test_aead(const struct alg_test_desc *desc, const char *driver,
tfm = crypto_alloc_aead(driver, type, mask);
if (IS_ERR(tfm)) {
if (PTR_ERR(tfm) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: aead: failed to allocate transform for %s: %ld\n",
driver, PTR_ERR(tfm));
return PTR_ERR(tfm);
@ -3285,7 +3285,7 @@ static int alg_test_skcipher(const struct alg_test_desc *desc,
tfm = crypto_alloc_skcipher(driver, type, mask);
if (IS_ERR(tfm)) {
if (PTR_ERR(tfm) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: skcipher: failed to allocate transform for %s: %ld\n",
driver, PTR_ERR(tfm));
return PTR_ERR(tfm);
@ -3700,7 +3700,7 @@ static int alg_test_cipher(const struct alg_test_desc *desc,
tfm = crypto_alloc_cipher(driver, type, mask);
if (IS_ERR(tfm)) {
if (PTR_ERR(tfm) == -ENOENT)
return -ENOENT;
return 0;
printk(KERN_ERR "alg: cipher: Failed to load transform for "
"%s: %ld\n", driver, PTR_ERR(tfm));
return PTR_ERR(tfm);
@ -3726,7 +3726,7 @@ static int alg_test_comp(const struct alg_test_desc *desc, const char *driver,
acomp = crypto_alloc_acomp(driver, type, mask);
if (IS_ERR(acomp)) {
if (PTR_ERR(acomp) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: acomp: Failed to load transform for %s: %ld\n",
driver, PTR_ERR(acomp));
return PTR_ERR(acomp);
@ -3740,7 +3740,7 @@ static int alg_test_comp(const struct alg_test_desc *desc, const char *driver,
comp = crypto_alloc_comp(driver, type, mask);
if (IS_ERR(comp)) {
if (PTR_ERR(comp) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: comp: Failed to load transform for %s: %ld\n",
driver, PTR_ERR(comp));
return PTR_ERR(comp);
@ -3818,7 +3818,7 @@ static int alg_test_cprng(const struct alg_test_desc *desc, const char *driver,
rng = crypto_alloc_rng(driver, type, mask);
if (IS_ERR(rng)) {
if (PTR_ERR(rng) == -ENOENT)
return -ENOENT;
return 0;
printk(KERN_ERR "alg: cprng: Failed to load transform for %s: "
"%ld\n", driver, PTR_ERR(rng));
return PTR_ERR(rng);
@ -3846,12 +3846,11 @@ static int drbg_cavs_test(const struct drbg_testvec *test, int pr,
drng = crypto_alloc_rng(driver, type, mask);
if (IS_ERR(drng)) {
kfree_sensitive(buf);
if (PTR_ERR(drng) == -ENOENT)
goto out_no_rng;
return 0;
printk(KERN_ERR "alg: drbg: could not allocate DRNG handle for "
"%s\n", driver);
out_no_rng:
kfree_sensitive(buf);
return PTR_ERR(drng);
}
@ -4095,7 +4094,7 @@ static int alg_test_kpp(const struct alg_test_desc *desc, const char *driver,
tfm = crypto_alloc_kpp(driver, type, mask);
if (IS_ERR(tfm)) {
if (PTR_ERR(tfm) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: kpp: Failed to load tfm for %s: %ld\n",
driver, PTR_ERR(tfm));
return PTR_ERR(tfm);
@ -4325,7 +4324,7 @@ static int alg_test_akcipher(const struct alg_test_desc *desc,
tfm = crypto_alloc_akcipher(driver, type, mask);
if (IS_ERR(tfm)) {
if (PTR_ERR(tfm) == -ENOENT)
return -ENOENT;
return 0;
pr_err("alg: akcipher: Failed to load tfm for %s: %ld\n",
driver, PTR_ERR(tfm));
return PTR_ERR(tfm);


@ -496,7 +496,7 @@ static int encode_addr_size_pairs(struct dma_xfer *xfer, struct wrapper_list *wr
nents = sgt->nents;
nents_dma = nents;
*size = QAIC_MANAGE_EXT_MSG_LENGTH - msg_hdr_len - sizeof(**out_trans);
for_each_sgtable_sg(sgt, sg, i) {
for_each_sgtable_dma_sg(sgt, sg, i) {
*size -= sizeof(*asp);
/* Save 1K for possible follow-up transactions. */
if (*size < SZ_1K) {


@ -184,7 +184,7 @@ static int clone_range_of_sgt_for_slice(struct qaic_device *qdev, struct sg_tabl
nents = 0;
size = size ? size : PAGE_SIZE;
for (sg = sgt_in->sgl; sg; sg = sg_next(sg)) {
for_each_sgtable_dma_sg(sgt_in, sg, j) {
len = sg_dma_len(sg);
if (!len)
@ -221,7 +221,7 @@ static int clone_range_of_sgt_for_slice(struct qaic_device *qdev, struct sg_tabl
/* copy relevant sg node and fix page and length */
sgn = sgf;
for_each_sgtable_sg(sgt, sg, j) {
for_each_sgtable_dma_sg(sgt, sg, j) {
memcpy(sg, sgn, sizeof(*sg));
if (sgn == sgf) {
sg_dma_address(sg) += offf;
@ -301,7 +301,7 @@ static int encode_reqs(struct qaic_device *qdev, struct bo_slice *slice,
* fence.
*/
dev_addr = req->dev_addr;
for_each_sgtable_sg(slice->sgt, sg, i) {
for_each_sgtable_dma_sg(slice->sgt, sg, i) {
slice->reqs[i].cmd = cmd;
slice->reqs[i].src_addr = cpu_to_le64(slice->dir == DMA_TO_DEVICE ?
sg_dma_address(sg) : dev_addr);


@ -448,73 +448,31 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
},
},
{
/* Asus ExpertBook B1402CBA */
/* Asus ExpertBook B1402C* */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "B1402CBA"),
DMI_MATCH(DMI_BOARD_NAME, "B1402C"),
},
},
{
/* Asus ExpertBook B1402CVA */
/* Asus ExpertBook B1502C* */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "B1402CVA"),
DMI_MATCH(DMI_BOARD_NAME, "B1502C"),
},
},
{
/* Asus ExpertBook B1502CBA */
/* Asus ExpertBook B2402 (B2402CBA / B2402FBA / B2402CVA / B2402FVA) */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "B1502CBA"),
DMI_MATCH(DMI_BOARD_NAME, "B2402"),
},
},
{
/* Asus ExpertBook B1502CGA */
/* Asus ExpertBook B2502 (B2502CBA / B2502FBA / B2502CVA / B2502FVA) */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "B1502CGA"),
},
},
{
/* Asus ExpertBook B1502CVA */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "B1502CVA"),
},
},
{
/* Asus ExpertBook B2402CBA */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "B2402CBA"),
},
},
{
/* Asus ExpertBook B2402FBA */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "B2402FBA"),
},
},
{
/* Asus ExpertBook B2502 */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "B2502CBA"),
},
},
{
/* Asus ExpertBook B2502FBA */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "B2502FBA"),
},
},
{
/* Asus ExpertBook B2502CVA */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "B2502CVA"),
DMI_MATCH(DMI_BOARD_NAME, "B2502"),
},
},
{
@ -532,24 +490,10 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
},
},
{
/* Asus Vivobook Pro N6506MV */
/* Asus Vivobook Pro N6506M* */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "N6506MV"),
},
},
{
/* Asus Vivobook Pro N6506MU */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "N6506MU"),
},
},
{
/* Asus Vivobook Pro N6506MJ */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "N6506MJ"),
DMI_MATCH(DMI_BOARD_NAME, "N6506M"),
},
},
{


@ -4099,10 +4099,20 @@ static void ata_eh_handle_port_suspend(struct ata_port *ap)
WARN_ON(ap->pflags & ATA_PFLAG_SUSPENDED);
/* Set all devices attached to the port in standby mode */
ata_for_each_link(link, ap, HOST_FIRST) {
ata_for_each_dev(dev, link, ENABLED)
ata_dev_power_set_standby(dev);
/*
* We will reach this point for all of the PM events:
* PM_EVENT_SUSPEND (if runtime PM, PM_EVENT_AUTO will also be set),
* PM_EVENT_FREEZE, and PM_EVENT_HIBERNATE.
*
* We do not want to perform disk spin down for PM_EVENT_FREEZE.
* (Spin down will be performed by the subsequent PM_EVENT_HIBERNATE.)
*/
if (!(ap->pm_mesg.event & PM_EVENT_FREEZE)) {
/* Set all devices attached to the port in standby mode */
ata_for_each_link(link, ap, HOST_FIRST) {
ata_for_each_dev(dev, link, ENABLED)
ata_dev_power_set_standby(dev);
}
}
/*


@ -195,6 +195,7 @@ int dev_pm_domain_attach_list(struct device *dev,
struct device *pd_dev = NULL;
int ret, i, num_pds = 0;
bool by_id = true;
size_t size;
u32 pd_flags = data ? data->pd_flags : 0;
u32 link_flags = pd_flags & PD_FLAG_NO_DEV_LINK ? 0 :
DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME;
@ -217,19 +218,17 @@ int dev_pm_domain_attach_list(struct device *dev,
if (num_pds <= 0)
return 0;
pds = devm_kzalloc(dev, sizeof(*pds), GFP_KERNEL);
pds = kzalloc(sizeof(*pds), GFP_KERNEL);
if (!pds)
return -ENOMEM;
pds->pd_devs = devm_kcalloc(dev, num_pds, sizeof(*pds->pd_devs),
GFP_KERNEL);
if (!pds->pd_devs)
return -ENOMEM;
pds->pd_links = devm_kcalloc(dev, num_pds, sizeof(*pds->pd_links),
GFP_KERNEL);
if (!pds->pd_links)
return -ENOMEM;
size = sizeof(*pds->pd_devs) + sizeof(*pds->pd_links);
pds->pd_devs = kcalloc(num_pds, size, GFP_KERNEL);
if (!pds->pd_devs) {
ret = -ENOMEM;
goto free_pds;
}
pds->pd_links = (void *)(pds->pd_devs + num_pds);
if (link_flags && pd_flags & PD_FLAG_DEV_LINK_ON)
link_flags |= DL_FLAG_RPM_ACTIVE;
@ -272,6 +271,9 @@ err_attach:
device_link_del(pds->pd_links[i]);
dev_pm_domain_detach(pds->pd_devs[i], true);
}
kfree(pds->pd_devs);
free_pds:
kfree(pds);
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_domain_attach_list);
@ -363,6 +365,9 @@ void dev_pm_domain_detach_list(struct dev_pm_domain_list *list)
device_link_del(list->pd_links[i]);
dev_pm_domain_detach(list->pd_devs[i], true);
}
kfree(list->pd_devs);
kfree(list);
}
EXPORT_SYMBOL_GPL(dev_pm_domain_detach_list);
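
The rework also trades the two devm_kcalloc() calls for a single kcalloc() whose element size is the sum of both pointer sizes, carving the pd_links array out of the tail of the same block. A minimal userspace sketch of that layout, with hypothetical demo_* names standing in for the kernel types:

#include <stdlib.h>

struct demo_dev;
struct demo_link;

struct demo_list {
	struct demo_dev **pd_devs;
	struct demo_link **pd_links;
};

static int demo_alloc(struct demo_list *l, size_t num_pds)
{
	/* one block: num_pds dev pointers followed by num_pds link pointers */
	size_t size = sizeof(*l->pd_devs) + sizeof(*l->pd_links);

	l->pd_devs = calloc(num_pds, size);
	if (!l->pd_devs)
		return -1;
	l->pd_links = (struct demo_link **)(l->pd_devs + num_pds);
	return 0;
}

A single free of pd_devs then releases both arrays, which is why the error and detach paths above only free pds->pd_devs and pds themselves.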


@ -1364,7 +1364,6 @@ extern struct bio_set drbd_io_bio_set;
extern struct mutex resources_mutex;
extern int conn_lowest_minor(struct drbd_connection *connection);
extern enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsigned int minor);
extern void drbd_destroy_device(struct kref *kref);
extern void drbd_delete_device(struct drbd_device *device);


@ -471,20 +471,6 @@ void _drbd_thread_stop(struct drbd_thread *thi, int restart, int wait)
wait_for_completion(&thi->stop);
}
int conn_lowest_minor(struct drbd_connection *connection)
{
struct drbd_peer_device *peer_device;
int vnr = 0, minor = -1;
rcu_read_lock();
peer_device = idr_get_next(&connection->peer_devices, &vnr);
if (peer_device)
minor = device_to_minor(peer_device->device);
rcu_read_unlock();
return minor;
}
#ifdef CONFIG_SMP
/*
* drbd_calc_cpu_mask() - Generate CPU masks, spread over all CPUs


@ -2380,10 +2380,19 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
* TODO: provide forward progress for RECOVERY handler, so that
* unprivileged device can benefit from it
*/
if (info.flags & UBLK_F_UNPRIVILEGED_DEV)
if (info.flags & UBLK_F_UNPRIVILEGED_DEV) {
info.flags &= ~(UBLK_F_USER_RECOVERY_REISSUE |
UBLK_F_USER_RECOVERY);
/*
* For USER_COPY, we depend on userspace to fill the request
* buffer via pwrite() to the ublk char device, which can't be
* used for an unprivileged device
*/
if (info.flags & UBLK_F_USER_COPY)
return -EINVAL;
}
/* the created device is always owned by current user */
ublk_store_owner_uid_gid(&info.owner_uid, &info.owner_gid);


@ -1345,10 +1345,15 @@ static int btusb_submit_intr_urb(struct hci_dev *hdev, gfp_t mem_flags)
if (!urb)
return -ENOMEM;
/* Use maximum HCI Event size so the USB stack handles
* ZLP/short-transfer automatically.
*/
size = HCI_MAX_EVENT_SIZE;
if (le16_to_cpu(data->udev->descriptor.idVendor) == 0x0a12 &&
le16_to_cpu(data->udev->descriptor.idProduct) == 0x0001)
/* Fake CSR devices don't seem to support short-transfer */
size = le16_to_cpu(data->intr_ep->wMaxPacketSize);
else
/* Use maximum HCI Event size so the USB stack handles
* ZLP/short-transfer automatically.
*/
size = HCI_MAX_EVENT_SIZE;
buf = kmalloc(size, mem_flags);
if (!buf) {
@ -4041,8 +4046,10 @@ static int btusb_suspend(struct usb_interface *intf, pm_message_t message)
BT_DBG("intf %p", intf);
/* Don't suspend if there are connections */
if (hci_conn_count(data->hdev))
/* Don't auto-suspend if there are connections; external suspend calls
* shall never fail.
*/
if (PMSG_IS_AUTO(message) && hci_conn_count(data->hdev))
return -EBUSY;
if (data->suspend_count++)


@ -2313,7 +2313,7 @@ static int cdrom_ioctl_media_changed(struct cdrom_device_info *cdi,
return -EINVAL;
/* Prevent arg from speculatively bypassing the length check */
barrier_nospec();
arg = array_index_nospec(arg, cdi->capacity);
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info)
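
Unlike a bare speculation barrier, array_index_nospec() clamps the index itself, so even a speculatively executed load cannot use an out-of-range value. Schematically, the generic fallback builds a branchless all-ones-or-zeroes mask; a simplified sketch with hypothetical demo_* names (not the exact kernel code, and assuming size stays below LONG_MAX):

/* mask is ~0UL when idx < size, 0 otherwise -- computed without a branch */
static unsigned long demo_index_mask(unsigned long idx, unsigned long size)
{
	return ~(long)(idx | (size - 1UL - idx)) >> (sizeof(long) * 8 - 1);
}

static unsigned long demo_index_nospec(unsigned long idx, unsigned long size)
{
	return idx & demo_index_mask(idx, size);
}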


@ -2006,25 +2006,27 @@ static int virtcons_probe(struct virtio_device *vdev)
multiport = true;
}
err = init_vqs(portdev);
if (err < 0) {
dev_err(&vdev->dev, "Error %d initializing vqs\n", err);
goto free_chrdev;
}
spin_lock_init(&portdev->ports_lock);
INIT_LIST_HEAD(&portdev->ports);
INIT_LIST_HEAD(&portdev->list);
virtio_device_ready(portdev->vdev);
INIT_WORK(&portdev->config_work, &config_work_handler);
INIT_WORK(&portdev->control_work, &control_work_handler);
if (multiport) {
spin_lock_init(&portdev->c_ivq_lock);
spin_lock_init(&portdev->c_ovq_lock);
}
err = init_vqs(portdev);
if (err < 0) {
dev_err(&vdev->dev, "Error %d initializing vqs\n", err);
goto free_chrdev;
}
virtio_device_ready(portdev->vdev);
if (multiport) {
err = fill_queue(portdev->c_ivq, &portdev->c_ivq_lock);
if (err < 0) {
dev_err(&vdev->dev,


@ -473,7 +473,7 @@ clk_multiple_parents_mux_test_init(struct kunit *test)
&clk_dummy_rate_ops,
0);
ctx->parents_ctx[0].rate = DUMMY_CLOCK_RATE_1;
ret = clk_hw_register(NULL, &ctx->parents_ctx[0].hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->parents_ctx[0].hw);
if (ret)
return ret;
@ -481,7 +481,7 @@ clk_multiple_parents_mux_test_init(struct kunit *test)
&clk_dummy_rate_ops,
0);
ctx->parents_ctx[1].rate = DUMMY_CLOCK_RATE_2;
ret = clk_hw_register(NULL, &ctx->parents_ctx[1].hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->parents_ctx[1].hw);
if (ret)
return ret;
@ -489,23 +489,13 @@ clk_multiple_parents_mux_test_init(struct kunit *test)
ctx->hw.init = CLK_HW_INIT_PARENTS("test-mux", parents,
&clk_multiple_parents_mux_ops,
CLK_SET_RATE_PARENT);
ret = clk_hw_register(NULL, &ctx->hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->hw);
if (ret)
return ret;
return 0;
}
static void
clk_multiple_parents_mux_test_exit(struct kunit *test)
{
struct clk_multiple_parent_ctx *ctx = test->priv;
clk_hw_unregister(&ctx->hw);
clk_hw_unregister(&ctx->parents_ctx[0].hw);
clk_hw_unregister(&ctx->parents_ctx[1].hw);
}
/*
* Test that for a clock with multiple parents, clk_get_parent()
* actually returns the current one.
@ -561,18 +551,18 @@ clk_test_multiple_parents_mux_set_range_set_parent_get_rate(struct kunit *test)
{
struct clk_multiple_parent_ctx *ctx = test->priv;
struct clk_hw *hw = &ctx->hw;
struct clk *clk = clk_hw_get_clk(hw, NULL);
struct clk *clk = clk_hw_get_clk_kunit(test, hw, NULL);
struct clk *parent1, *parent2;
unsigned long rate;
int ret;
kunit_skip(test, "This needs to be fixed in the core.");
parent1 = clk_hw_get_clk(&ctx->parents_ctx[0].hw, NULL);
parent1 = clk_hw_get_clk_kunit(test, &ctx->parents_ctx[0].hw, NULL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent1);
KUNIT_ASSERT_TRUE(test, clk_is_match(clk_get_parent(clk), parent1));
parent2 = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL);
parent2 = clk_hw_get_clk_kunit(test, &ctx->parents_ctx[1].hw, NULL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent2);
ret = clk_set_rate(parent1, DUMMY_CLOCK_RATE_1);
@ -593,10 +583,6 @@ clk_test_multiple_parents_mux_set_range_set_parent_get_rate(struct kunit *test)
KUNIT_ASSERT_GT(test, rate, 0);
KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1 - 1000);
KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_1 + 1000);
clk_put(parent2);
clk_put(parent1);
clk_put(clk);
}
static struct kunit_case clk_multiple_parents_mux_test_cases[] = {
@ -617,7 +603,6 @@ static struct kunit_suite
clk_multiple_parents_mux_test_suite = {
.name = "clk-multiple-parents-mux-test",
.init = clk_multiple_parents_mux_test_init,
.exit = clk_multiple_parents_mux_test_exit,
.test_cases = clk_multiple_parents_mux_test_cases,
};
@ -637,29 +622,20 @@ clk_orphan_transparent_multiple_parent_mux_test_init(struct kunit *test)
&clk_dummy_rate_ops,
0);
ctx->parents_ctx[1].rate = DUMMY_CLOCK_INIT_RATE;
ret = clk_hw_register(NULL, &ctx->parents_ctx[1].hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->parents_ctx[1].hw);
if (ret)
return ret;
ctx->hw.init = CLK_HW_INIT_PARENTS("test-orphan-mux", parents,
&clk_multiple_parents_mux_ops,
CLK_SET_RATE_PARENT);
ret = clk_hw_register(NULL, &ctx->hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->hw);
if (ret)
return ret;
return 0;
}
static void
clk_orphan_transparent_multiple_parent_mux_test_exit(struct kunit *test)
{
struct clk_multiple_parent_ctx *ctx = test->priv;
clk_hw_unregister(&ctx->hw);
clk_hw_unregister(&ctx->parents_ctx[1].hw);
}
/*
* Test that, for a mux whose current parent hasn't been registered yet and is
* thus orphan, clk_get_parent() will return NULL.
@ -912,7 +888,7 @@ clk_test_orphan_transparent_multiple_parent_mux_set_range_set_parent_get_rate(st
{
struct clk_multiple_parent_ctx *ctx = test->priv;
struct clk_hw *hw = &ctx->hw;
struct clk *clk = clk_hw_get_clk(hw, NULL);
struct clk *clk = clk_hw_get_clk_kunit(test, hw, NULL);
struct clk *parent;
unsigned long rate;
int ret;
@ -921,7 +897,7 @@ clk_test_orphan_transparent_multiple_parent_mux_set_range_set_parent_get_rate(st
clk_hw_set_rate_range(hw, DUMMY_CLOCK_RATE_1, DUMMY_CLOCK_RATE_2);
parent = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL);
parent = clk_hw_get_clk_kunit(test, &ctx->parents_ctx[1].hw, NULL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent);
ret = clk_set_parent(clk, parent);
@ -931,9 +907,6 @@ clk_test_orphan_transparent_multiple_parent_mux_set_range_set_parent_get_rate(st
KUNIT_ASSERT_GT(test, rate, 0);
KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1);
KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2);
clk_put(parent);
clk_put(clk);
}
static struct kunit_case clk_orphan_transparent_multiple_parent_mux_test_cases[] = {
@ -961,7 +934,6 @@ static struct kunit_case clk_orphan_transparent_multiple_parent_mux_test_cases[]
static struct kunit_suite clk_orphan_transparent_multiple_parent_mux_test_suite = {
.name = "clk-orphan-transparent-multiple-parent-mux-test",
.init = clk_orphan_transparent_multiple_parent_mux_test_init,
.exit = clk_orphan_transparent_multiple_parent_mux_test_exit,
.test_cases = clk_orphan_transparent_multiple_parent_mux_test_cases,
};
@ -986,7 +958,7 @@ static int clk_single_parent_mux_test_init(struct kunit *test)
&clk_dummy_rate_ops,
0);
ret = clk_hw_register(NULL, &ctx->parent_ctx.hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->parent_ctx.hw);
if (ret)
return ret;
@ -994,7 +966,7 @@ static int clk_single_parent_mux_test_init(struct kunit *test)
&clk_dummy_single_parent_ops,
CLK_SET_RATE_PARENT);
ret = clk_hw_register(NULL, &ctx->hw);
ret = clk_hw_register_kunit(test, NULL, &ctx->hw);
if (ret)
return ret;
@ -1060,7 +1032,7 @@ clk_test_single_parent_mux_set_range_disjoint_child_last(struct kunit *test)
{
struct clk_single_parent_ctx *ctx = test->priv;
struct clk_hw *hw = &ctx->hw;
struct clk *clk = clk_hw_get_clk(hw, NULL);
struct clk *clk = clk_hw_get_clk_kunit(test, hw, NULL);
struct clk *parent;
int ret;
@ -1074,8 +1046,6 @@ clk_test_single_parent_mux_set_range_disjoint_child_last(struct kunit *test)
ret = clk_set_rate_range(clk, 3000, 4000);
KUNIT_EXPECT_LT(test, ret, 0);
clk_put(clk);
}
/*
@ -1092,7 +1062,7 @@ clk_test_single_parent_mux_set_range_disjoint_parent_last(struct kunit *test)
{
struct clk_single_parent_ctx *ctx = test->priv;
struct clk_hw *hw = &ctx->hw;
struct clk *clk = clk_hw_get_clk(hw, NULL);
struct clk *clk = clk_hw_get_clk_kunit(test, hw, NULL);
struct clk *parent;
int ret;
@ -1106,8 +1076,6 @@ clk_test_single_parent_mux_set_range_disjoint_parent_last(struct kunit *test)
ret = clk_set_rate_range(parent, 3000, 4000);
KUNIT_EXPECT_LT(test, ret, 0);
clk_put(clk);
}
/*
@ -1238,7 +1206,6 @@ static struct kunit_suite
clk_single_parent_mux_test_suite = {
.name = "clk-single-parent-mux-test",
.init = clk_single_parent_mux_test_init,
.exit = clk_single_parent_mux_test_exit,
.test_cases = clk_single_parent_mux_test_cases,
};


@ -439,7 +439,7 @@ unsigned long rockchip_clk_find_max_clk_id(struct rockchip_clk_branch *list,
if (list->id > max)
max = list->id;
if (list->child && list->child->id > max)
max = list->id;
max = list->child->id;
}
return max;


@ -1155,6 +1155,7 @@ static const struct of_device_id exynosautov920_cmu_of_match[] = {
.compatible = "samsung,exynosautov920-cmu-peric0",
.data = &peric0_cmu_info,
},
{ }
};
static struct platform_driver exynosautov920_cmu_driver __refdata = {


@ -536,11 +536,16 @@ static int amd_pstate_verify(struct cpufreq_policy_data *policy)
static int amd_pstate_update_min_max_limit(struct cpufreq_policy *policy)
{
u32 max_limit_perf, min_limit_perf, lowest_perf;
u32 max_limit_perf, min_limit_perf, lowest_perf, max_perf;
struct amd_cpudata *cpudata = policy->driver_data;
max_limit_perf = div_u64(policy->max * cpudata->highest_perf, cpudata->max_freq);
min_limit_perf = div_u64(policy->min * cpudata->highest_perf, cpudata->max_freq);
if (cpudata->boost_supported && !policy->boost_enabled)
max_perf = READ_ONCE(cpudata->nominal_perf);
else
max_perf = READ_ONCE(cpudata->highest_perf);
max_limit_perf = div_u64(policy->max * max_perf, policy->cpuinfo.max_freq);
min_limit_perf = div_u64(policy->min * max_perf, policy->cpuinfo.max_freq);
lowest_perf = READ_ONCE(cpudata->lowest_perf);
if (min_limit_perf < lowest_perf)
@ -1201,11 +1206,21 @@ static int amd_pstate_register_driver(int mode)
return -EINVAL;
cppc_state = mode;
ret = amd_pstate_enable(true);
if (ret) {
pr_err("failed to enable cppc during amd-pstate driver registration, return %d\n",
ret);
amd_pstate_driver_cleanup();
return ret;
}
ret = cpufreq_register_driver(current_pstate_driver);
if (ret) {
amd_pstate_driver_cleanup();
return ret;
}
return 0;
}
@ -1496,10 +1511,13 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
u64 value;
s16 epp;
max_perf = READ_ONCE(cpudata->highest_perf);
if (cpudata->boost_supported && !policy->boost_enabled)
max_perf = READ_ONCE(cpudata->nominal_perf);
else
max_perf = READ_ONCE(cpudata->highest_perf);
min_perf = READ_ONCE(cpudata->lowest_perf);
max_limit_perf = div_u64(policy->max * cpudata->highest_perf, cpudata->max_freq);
min_limit_perf = div_u64(policy->min * cpudata->highest_perf, cpudata->max_freq);
max_limit_perf = div_u64(policy->max * max_perf, policy->cpuinfo.max_freq);
min_limit_perf = div_u64(policy->min * max_perf, policy->cpuinfo.max_freq);
if (min_limit_perf < min_perf)
min_limit_perf = min_perf;


@ -947,7 +947,7 @@ struct ahash_alg mv_md5_alg = {
.base = {
.cra_name = "md5",
.cra_driver_name = "mv-md5",
.cra_priority = 300,
.cra_priority = 0,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY,
@ -1018,7 +1018,7 @@ struct ahash_alg mv_sha1_alg = {
.base = {
.cra_name = "sha1",
.cra_driver_name = "mv-sha1",
.cra_priority = 300,
.cra_priority = 0,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY,
@ -1092,7 +1092,7 @@ struct ahash_alg mv_sha256_alg = {
.base = {
.cra_name = "sha256",
.cra_driver_name = "mv-sha256",
.cra_priority = 300,
.cra_priority = 0,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY,
@ -1302,7 +1302,7 @@ struct ahash_alg mv_ahmac_md5_alg = {
.base = {
.cra_name = "hmac(md5)",
.cra_driver_name = "mv-hmac-md5",
.cra_priority = 300,
.cra_priority = 0,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY,
@ -1373,7 +1373,7 @@ struct ahash_alg mv_ahmac_sha1_alg = {
.base = {
.cra_name = "hmac(sha1)",
.cra_driver_name = "mv-hmac-sha1",
.cra_priority = 300,
.cra_priority = 0,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY,
@ -1444,7 +1444,7 @@ struct ahash_alg mv_ahmac_sha256_alg = {
.base = {
.cra_name = "hmac(sha256)",
.cra_driver_name = "mv-hmac-sha256",
.cra_priority = 300,
.cra_priority = 0,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY,


@ -86,7 +86,7 @@ static void dax_set_mapping(struct vm_fault *vmf, pfn_t pfn,
nr_pages = 1;
pgoff = linear_page_index(vmf->vma,
ALIGN(vmf->address, fault_size));
ALIGN_DOWN(vmf->address, fault_size));
for (i = 0; i < nr_pages; i++) {
struct page *page = pfn_to_page(pfn_t_to_pfn(pfn) + i);
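
The fix flips the rounding direction: ALIGN() rounds the faulting address up into the next aligned block, so linear_page_index() was computing the pgoff of the wrong huge page, while ALIGN_DOWN() yields the start of the page actually being mapped. A small demo with power-of-two macros equivalent to the kernel's:

#include <stdio.h>

#define ALIGN(x, a)      (((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

int main(void)
{
	unsigned long addr = 0x1234567UL;
	unsigned long fault_size = 1UL << 21;	/* 2 MiB PMD fault */

	printf("ALIGN:      %#lx\n", ALIGN(addr, fault_size));      /* 0x1400000 */
	printf("ALIGN_DOWN: %#lx\n", ALIGN_DOWN(addr, fault_size)); /* 0x1200000 */
	return 0;
}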


@ -1391,11 +1391,12 @@ static struct ep93xx_dma_engine *ep93xx_dma_of_probe(struct platform_device *pde
INIT_LIST_HEAD(&dma_dev->channels);
for (i = 0; i < edma->num_channels; i++) {
struct ep93xx_dma_chan *edmac = &edma->channels[i];
int len;
edmac->chan.device = dma_dev;
edmac->regs = devm_platform_ioremap_resource(pdev, i);
if (IS_ERR(edmac->regs))
return edmac->regs;
return ERR_CAST(edmac->regs);
edmac->irq = fwnode_irq_get(dev_fwnode(dev), i);
if (edmac->irq < 0)
@ -1404,9 +1405,11 @@ static struct ep93xx_dma_engine *ep93xx_dma_of_probe(struct platform_device *pde
edmac->edma = edma;
if (edma->m2m)
snprintf(dma_clk_name, sizeof(dma_clk_name), "m2m%u", i);
len = snprintf(dma_clk_name, sizeof(dma_clk_name), "m2m%u", i);
else
snprintf(dma_clk_name, sizeof(dma_clk_name), "m2p%u", i);
len = snprintf(dma_clk_name, sizeof(dma_clk_name), "m2p%u", i);
if (len >= sizeof(dma_clk_name))
return ERR_PTR(-ENOBUFS);
edmac->clk = devm_clk_get(dev, dma_clk_name);
if (IS_ERR(edmac->clk)) {


@ -481,11 +481,16 @@ static int ffa_msg_send_direct_req2(u16 src_id, u16 dst_id, const uuid_t *uuid,
struct ffa_send_direct_data2 *data)
{
u32 src_dst_ids = PACK_TARGET_INFO(src_id, dst_id);
union {
uuid_t uuid;
__le64 regs[2];
} uuid_regs = { .uuid = *uuid };
ffa_value_t ret, args = {
.a0 = FFA_MSG_SEND_DIRECT_REQ2, .a1 = src_dst_ids,
.a0 = FFA_MSG_SEND_DIRECT_REQ2,
.a1 = src_dst_ids,
.a2 = le64_to_cpu(uuid_regs.regs[0]),
.a3 = le64_to_cpu(uuid_regs.regs[1]),
};
export_uuid((u8 *)&args.a2, uuid);
memcpy((void *)&args + offsetof(ffa_value_t, a4), data, sizeof(*data));
invoke_ffa_fn(args, &ret);
@ -496,7 +501,7 @@ static int ffa_msg_send_direct_req2(u16 src_id, u16 dst_id, const uuid_t *uuid,
return ffa_to_linux_errno((int)ret.a2);
if (ret.a0 == FFA_MSG_SEND_DIRECT_RESP2) {
memcpy(data, &ret.a4, sizeof(*data));
memcpy(data, (void *)&ret + offsetof(ffa_value_t, a4), sizeof(*data));
return 0;
}
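
The union makes the byte order explicit: the UUID's 16 bytes are treated as two little-endian 64-bit values and converted to CPU endianness before being placed in the a2/a3 arguments, so big-endian kernels pack the same byte layout as little-endian ones. A hedged userspace sketch of the same packing, with le64toh() from <endian.h> standing in for le64_to_cpu() and a hypothetical demo_* name:

#include <endian.h>
#include <stdint.h>
#include <string.h>

static void demo_pack_uuid(const uint8_t uuid[16], uint64_t regs[2])
{
	uint64_t le[2];

	memcpy(le, uuid, sizeof(le));	/* raw little-endian byte layout */
	regs[0] = le64toh(le[0]);	/* what lands in a2 */
	regs[1] = le64toh(le[1]);	/* what lands in a3 */
}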


@ -2976,10 +2976,8 @@ static struct scmi_debug_info *scmi_debugfs_common_setup(struct scmi_info *info)
dbg->top_dentry = top_dentry;
if (devm_add_action_or_reset(info->dev,
scmi_debugfs_common_cleanup, dbg)) {
scmi_debugfs_common_cleanup(dbg);
scmi_debugfs_common_cleanup, dbg))
return NULL;
}
return dbg;
}


@ -1,8 +1,10 @@
# SPDX-License-Identifier: GPL-2.0-only
scmi_transport_mailbox-objs := mailbox.o
obj-$(CONFIG_ARM_SCMI_TRANSPORT_MAILBOX) += scmi_transport_mailbox.o
# Keep before scmi_transport_mailbox.o to allow precedence
# while matching the compatible.
scmi_transport_smc-objs := smc.o
obj-$(CONFIG_ARM_SCMI_TRANSPORT_SMC) += scmi_transport_smc.o
scmi_transport_mailbox-objs := mailbox.o
obj-$(CONFIG_ARM_SCMI_TRANSPORT_MAILBOX) += scmi_transport_mailbox.o
scmi_transport_optee-objs := optee.o
obj-$(CONFIG_ARM_SCMI_TRANSPORT_OPTEE) += scmi_transport_optee.o
scmi_transport_virtio-objs := virtio.o


@ -25,6 +25,7 @@
* @chan_platform_receiver: Optional Platform Receiver mailbox unidirectional channel
* @cinfo: SCMI channel info
* @shmem: Transmit/Receive shared memory area
* @chan_lock: Lock that prevents multiple xfers from being queued
*/
struct scmi_mailbox {
struct mbox_client cl;
@ -33,6 +34,7 @@ struct scmi_mailbox {
struct mbox_chan *chan_platform_receiver;
struct scmi_chan_info *cinfo;
struct scmi_shared_mem __iomem *shmem;
struct mutex chan_lock;
};
#define client_to_scmi_mailbox(c) container_of(c, struct scmi_mailbox, cl)
@ -238,6 +240,7 @@ static int mailbox_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
cinfo->transport_info = smbox;
smbox->cinfo = cinfo;
mutex_init(&smbox->chan_lock);
return 0;
}
@ -267,13 +270,23 @@ static int mailbox_send_message(struct scmi_chan_info *cinfo,
struct scmi_mailbox *smbox = cinfo->transport_info;
int ret;
/*
* The mailbox layer has its own queue. However, the mailbox queue
* confuses the per-message SCMI timeouts since the clock starts when
* the message is submitted into the mailbox queue. So when multiple
* messages are queued up, the clock starts on all messages instead of
* only the one in flight.
*/
mutex_lock(&smbox->chan_lock);
ret = mbox_send_message(smbox->chan, xfer);
/* mbox_send_message returns non-negative value on success */
if (ret < 0) {
mutex_unlock(&smbox->chan_lock);
return ret;
}
/* mbox_send_message returns non-negative value on success, so reset */
if (ret > 0)
ret = 0;
return ret;
return 0;
}
static void mailbox_mark_txdone(struct scmi_chan_info *cinfo, int ret,
@ -281,13 +294,10 @@ static void mailbox_mark_txdone(struct scmi_chan_info *cinfo, int ret,
{
struct scmi_mailbox *smbox = cinfo->transport_info;
/*
* NOTE: we might prefer not to need the mailbox ticker to manage the
* transfer queueing since the protocol layer queues things by itself.
* Unfortunately, we have to kick the mailbox framework after we have
* received our message.
*/
mbox_client_txdone(smbox->chan, ret);
/* Release channel */
mutex_unlock(&smbox->chan_lock);
}
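
The net effect is that a transfer owns the channel from mailbox_send_message() until mailbox_mark_txdone(), so the SCMI timeout clock only runs for the single in-flight message. A schematic sketch of that ownership window, using pthreads and hypothetical demo_* names in place of the kernel primitives:

#include <pthread.h>

struct demo_chan {
	pthread_mutex_t lock;	/* plays the role of smbox->chan_lock */
};

static int demo_send(struct demo_chan *c)
{
	pthread_mutex_lock(&c->lock);	/* taken for the whole transfer */
	/* ... submit the message, i.e. mbox_send_message() ... */
	return 0;
}

static void demo_txdone(struct demo_chan *c)
{
	/* ... kick the framework, i.e. mbox_client_txdone() ... */
	pthread_mutex_unlock(&c->lock);	/* channel released only here */
}
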
static void mailbox_fetch_response(struct scmi_chan_info *cinfo,


@ -406,6 +406,8 @@ static void __aspeed_gpio_set(struct gpio_chip *gc, unsigned int offset,
gpio->dcache[GPIO_BANK(offset)] = reg;
iowrite32(reg, addr);
/* Flush write */
ioread32(addr);
}
static void aspeed_gpio_set(struct gpio_chip *gc, unsigned int offset,
@ -1191,7 +1193,7 @@ static int __init aspeed_gpio_probe(struct platform_device *pdev)
if (!gpio_id)
return -EINVAL;
gpio->clk = of_clk_get(pdev->dev.of_node, 0);
gpio->clk = devm_clk_get_enabled(&pdev->dev, NULL);
if (IS_ERR(gpio->clk)) {
dev_warn(&pdev->dev,
"Failed to get clock from devicetree, debouncing disabled\n");


@ -1439,8 +1439,8 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void **process_info,
list_add_tail(&vm->vm_list_node,
&(vm->process_info->vm_list_head));
vm->process_info->n_vms++;
*ef = dma_fence_get(&vm->process_info->eviction_fence->base);
if (ef)
*ef = dma_fence_get(&vm->process_info->eviction_fence->base);
mutex_unlock(&vm->process_info->lock);
return 0;


@ -265,7 +265,7 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
/* Only a single BO list is allowed to simplify handling. */
if (p->bo_list)
ret = -EINVAL;
goto free_partial_kdata;
ret = amdgpu_cs_p1_bo_handles(p, p->chunks[i].kdata);
if (ret)


@ -1635,11 +1635,9 @@ int amdgpu_gfx_sysfs_isolation_shader_init(struct amdgpu_device *adev)
{
int r;
if (!amdgpu_sriov_vf(adev)) {
r = device_create_file(adev->dev, &dev_attr_enforce_isolation);
if (r)
return r;
}
r = device_create_file(adev->dev, &dev_attr_enforce_isolation);
if (r)
return r;
r = device_create_file(adev->dev, &dev_attr_run_cleaner_shader);
if (r)
@ -1650,8 +1648,7 @@ int amdgpu_gfx_sysfs_isolation_shader_init(struct amdgpu_device *adev)
void amdgpu_gfx_sysfs_isolation_shader_fini(struct amdgpu_device *adev)
{
if (!amdgpu_sriov_vf(adev))
device_remove_file(adev->dev, &dev_attr_enforce_isolation);
device_remove_file(adev->dev, &dev_attr_enforce_isolation);
device_remove_file(adev->dev, &dev_attr_run_cleaner_shader);
}


@ -1203,8 +1203,10 @@ int amdgpu_mes_add_ring(struct amdgpu_device *adev, int gang_id,
r = amdgpu_ring_init(adev, ring, 1024, NULL, 0,
AMDGPU_RING_PRIO_DEFAULT, NULL);
if (r)
if (r) {
amdgpu_mes_unlock(&adev->mes);
goto clean_up_memory;
}
amdgpu_mes_ring_to_queue_props(adev, ring, &qprops);
@ -1237,7 +1239,6 @@ clean_up_ring:
amdgpu_ring_fini(ring);
clean_up_memory:
kfree(ring);
amdgpu_mes_unlock(&adev->mes);
return r;
}


@ -621,7 +621,7 @@ static int mes_v12_0_set_hw_resources(struct amdgpu_mes *mes, int pipe)
if (amdgpu_mes_log_enable) {
mes_set_hw_res_pkt.enable_mes_event_int_logging = 1;
mes_set_hw_res_pkt.event_intr_history_gpu_mc_ptr = mes->event_log_gpu_addr;
mes_set_hw_res_pkt.event_intr_history_gpu_mc_ptr = mes->event_log_gpu_addr + pipe * AMDGPU_MES_LOG_BUFFER_SIZE;
}
return mes_v12_0_submit_pkt_and_poll_completion(mes, pipe,
@ -1336,7 +1336,7 @@ static int mes_v12_0_sw_init(void *handle)
adev->mes.kiq_hw_fini = &mes_v12_0_kiq_hw_fini;
adev->mes.enable_legacy_queue_map = true;
adev->mes.event_log_size = AMDGPU_MES_LOG_BUFFER_SIZE;
adev->mes.event_log_size = adev->enable_uni_mes ? (AMDGPU_MAX_MES_PIPES * AMDGPU_MES_LOG_BUFFER_SIZE) : AMDGPU_MES_LOG_BUFFER_SIZE;
r = amdgpu_mes_init(adev);
if (r)


@ -1148,7 +1148,7 @@ static int kfd_ioctl_alloc_memory_of_gpu(struct file *filep,
if (flags & KFD_IOC_ALLOC_MEM_FLAGS_AQL_QUEUE_MEM)
size >>= 1;
WRITE_ONCE(pdd->vram_usage, pdd->vram_usage + PAGE_ALIGN(size));
atomic64_add(PAGE_ALIGN(size), &pdd->vram_usage);
}
mutex_unlock(&p->mutex);
@ -1219,7 +1219,7 @@ static int kfd_ioctl_free_memory_of_gpu(struct file *filep,
kfd_process_device_remove_obj_handle(
pdd, GET_IDR_HANDLE(args->handle));
WRITE_ONCE(pdd->vram_usage, pdd->vram_usage - size);
atomic64_sub(size, &pdd->vram_usage);
err_unlock:
err_pdd:
@ -2347,7 +2347,7 @@ static int criu_restore_memory_of_gpu(struct kfd_process_device *pdd,
} else if (bo_bucket->alloc_flags & KFD_IOC_ALLOC_MEM_FLAGS_VRAM) {
bo_bucket->restored_offset = offset;
/* Update the VRAM usage count */
WRITE_ONCE(pdd->vram_usage, pdd->vram_usage + bo_bucket->size);
atomic64_add(bo_bucket->size, &pdd->vram_usage);
}
return 0;
}
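
READ_ONCE()/WRITE_ONCE() only make the individual loads and stores tearing-free; the read-modify-write as a whole can still race, losing updates when two threads adjust vram_usage concurrently. The atomic64_t conversion closes that window. A sketch of the hazard, with C11 atomics standing in for the kernel helpers (hypothetical demo code):

#include <stdatomic.h>
#include <stdint.h>

static _Atomic int64_t vram_usage;

/* Racy: load and store are each atomic, but another thread can
 * update vram_usage between them, and its change is then lost. */
static void racy_add(int64_t size)
{
	int64_t v = atomic_load(&vram_usage);	/* ~READ_ONCE()  */
	atomic_store(&vram_usage, v + size);	/* ~WRITE_ONCE() */
}

/* Safe: one indivisible read-modify-write, as atomic64_add() does. */
static void safe_add(int64_t size)
{
	atomic_fetch_add(&vram_usage, size);
}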


@ -775,7 +775,7 @@ struct kfd_process_device {
enum kfd_pdd_bound bound;
/* VRAM usage */
uint64_t vram_usage;
atomic64_t vram_usage;
struct attribute attr_vram;
char vram_filename[MAX_SYSFS_FILENAME_LEN];


@ -332,7 +332,7 @@ static ssize_t kfd_procfs_show(struct kobject *kobj, struct attribute *attr,
} else if (strncmp(attr->name, "vram_", 5) == 0) {
struct kfd_process_device *pdd = container_of(attr, struct kfd_process_device,
attr_vram);
return snprintf(buffer, PAGE_SIZE, "%llu\n", READ_ONCE(pdd->vram_usage));
return snprintf(buffer, PAGE_SIZE, "%llu\n", atomic64_read(&pdd->vram_usage));
} else if (strncmp(attr->name, "sdma_", 5) == 0) {
struct kfd_process_device *pdd = container_of(attr, struct kfd_process_device,
attr_sdma);
@ -1625,7 +1625,7 @@ struct kfd_process_device *kfd_create_process_device_data(struct kfd_node *dev,
pdd->bound = PDD_UNBOUND;
pdd->already_dequeued = false;
pdd->runtime_inuse = false;
pdd->vram_usage = 0;
atomic64_set(&pdd->vram_usage, 0);
pdd->sdma_past_activity_counter = 0;
pdd->user_gpu_id = dev->id;
atomic64_set(&pdd->evict_duration_counter, 0);
@ -1702,12 +1702,15 @@ int kfd_process_device_init_vm(struct kfd_process_device *pdd,
ret = amdgpu_amdkfd_gpuvm_acquire_process_vm(dev->adev, avm,
&p->kgd_process_info,
&ef);
p->ef ? NULL : &ef);
if (ret) {
dev_err(dev->adev->dev, "Failed to create process VM object\n");
return ret;
}
RCU_INIT_POINTER(p->ef, ef);
if (!p->ef)
RCU_INIT_POINTER(p->ef, ef);
pdd->drm_priv = drm_file->private_data;
ret = kfd_process_device_reserve_ib_mem(pdd);


@ -405,6 +405,27 @@ static void svm_range_bo_release(struct kref *kref)
spin_lock(&svm_bo->list_lock);
}
spin_unlock(&svm_bo->list_lock);
if (mmget_not_zero(svm_bo->eviction_fence->mm)) {
struct kfd_process_device *pdd;
struct kfd_process *p;
struct mm_struct *mm;
mm = svm_bo->eviction_fence->mm;
/*
* The forked child process takes a ref on the svm_bo device pages, so the
* svm_bo could be released after the parent process is gone.
*/
p = kfd_lookup_process_by_mm(mm);
if (p) {
pdd = kfd_get_process_device_data(svm_bo->node, p);
if (pdd)
atomic64_sub(amdgpu_bo_size(svm_bo->bo), &pdd->vram_usage);
kfd_unref_process(p);
}
mmput(mm);
}
if (!dma_fence_is_signaled(&svm_bo->eviction_fence->base))
/* We're not in the eviction worker. Signal the fence. */
dma_fence_signal(&svm_bo->eviction_fence->base);
@ -532,6 +553,7 @@ int
svm_range_vram_node_new(struct kfd_node *node, struct svm_range *prange,
bool clear)
{
struct kfd_process_device *pdd;
struct amdgpu_bo_param bp;
struct svm_range_bo *svm_bo;
struct amdgpu_bo_user *ubo;
@ -623,6 +645,10 @@ svm_range_vram_node_new(struct kfd_node *node, struct svm_range *prange,
list_add(&prange->svm_bo_list, &svm_bo->range_list);
spin_unlock(&svm_bo->list_lock);
pdd = svm_range_get_pdd_by_node(prange, node);
if (pdd)
atomic64_add(amdgpu_bo_size(bo), &pdd->vram_usage);
return 0;
reserve_bo_failed:


@ -2972,10 +2972,11 @@ static int dm_suspend(void *handle)
hpd_rx_irq_work_suspend(dm);
if (adev->dm.dc->caps.ips_support)
dc_allow_idle_optimizations(adev->dm.dc, true);
dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3);
if (dm->dc->caps.ips_support && adev->in_s0ix)
dc_allow_idle_optimizations(dm->dc, true);
dc_dmub_srv_set_power_state(dm->dc->ctx->dmub_srv, DC_ACPI_CM_POWER_STATE_D3);
return 0;


@ -5065,11 +5065,26 @@ static bool update_planes_and_stream_v3(struct dc *dc,
return true;
}
static void clear_update_flags(struct dc_surface_update *srf_updates,
int surface_count, struct dc_stream_state *stream)
{
int i;
if (stream)
stream->update_flags.raw = 0;
for (i = 0; i < surface_count; i++)
if (srf_updates[i].surface)
srf_updates[i].surface->update_flags.raw = 0;
}
bool dc_update_planes_and_stream(struct dc *dc,
struct dc_surface_update *srf_updates, int surface_count,
struct dc_stream_state *stream,
struct dc_stream_update *stream_update)
{
bool ret = false;
dc_exit_ips_for_hw_access(dc);
/*
* update planes and stream version 3 separates FULL and FAST updates
@ -5086,10 +5101,16 @@ bool dc_update_planes_and_stream(struct dc *dc,
* features as they are now transparent to the new sequence.
*/
if (dc->ctx->dce_version >= DCN_VERSION_4_01)
return update_planes_and_stream_v3(dc, srf_updates,
ret = update_planes_and_stream_v3(dc, srf_updates,
surface_count, stream, stream_update);
return update_planes_and_stream_v2(dc, srf_updates,
else
ret = update_planes_and_stream_v2(dc, srf_updates,
surface_count, stream, stream_update);
if (ret)
clear_update_flags(srf_updates, surface_count, stream);
return ret;
}
void dc_commit_updates_for_stream(struct dc *dc,
@ -5099,6 +5120,8 @@ void dc_commit_updates_for_stream(struct dc *dc,
struct dc_stream_update *stream_update,
struct dc_state *state)
{
bool ret = false;
dc_exit_ips_for_hw_access(dc);
/* TODO: Since change commit sequence can have a huge impact,
* we decided to only enable it for DCN3x. However, as soon as
@ -5106,17 +5129,17 @@ void dc_commit_updates_for_stream(struct dc *dc,
* the new sequence for all ASICs.
*/
if (dc->ctx->dce_version >= DCN_VERSION_4_01) {
update_planes_and_stream_v3(dc, srf_updates, surface_count,
ret = update_planes_and_stream_v3(dc, srf_updates, surface_count,
stream, stream_update);
return;
}
if (dc->ctx->dce_version >= DCN_VERSION_3_2) {
update_planes_and_stream_v2(dc, srf_updates, surface_count,
} else if (dc->ctx->dce_version >= DCN_VERSION_3_2) {
ret = update_planes_and_stream_v2(dc, srf_updates, surface_count,
stream, stream_update);
return;
}
update_planes_and_stream_v1(dc, srf_updates, surface_count, stream,
stream_update, state);
} else
ret = update_planes_and_stream_v1(dc, srf_updates, surface_count, stream,
stream_update, state);
if (ret)
clear_update_flags(srf_updates, surface_count, stream);
}
uint8_t dc_get_current_stream_count(struct dc *dc)


@@ -60,7 +60,7 @@ struct vi_dpm_level {
 
 struct vi_dpm_table {
 	uint32_t count;
-	struct vi_dpm_level dpm_level[] __counted_by(count);
+	struct vi_dpm_level dpm_level[];
 };
 
 #define PCIE_PERF_REQ_REMOVE_REGISTRY 0
@@ -91,7 +91,7 @@ struct phm_set_power_state_input {
 
 struct phm_clock_array {
 	uint32_t count;
-	uint32_t values[] __counted_by(count);
+	uint32_t values[];
 };
 
 struct phm_clock_voltage_dependency_record {
@@ -123,7 +123,7 @@ struct phm_acpclock_voltage_dependency_record {
 
 struct phm_clock_voltage_dependency_table {
 	uint32_t count;
-	struct phm_clock_voltage_dependency_record entries[] __counted_by(count);
+	struct phm_clock_voltage_dependency_record entries[];
 };
 
 struct phm_phase_shedding_limits_record {
@@ -140,7 +140,7 @@ struct phm_uvd_clock_voltage_dependency_record {
 
 struct phm_uvd_clock_voltage_dependency_table {
 	uint8_t count;
-	struct phm_uvd_clock_voltage_dependency_record entries[] __counted_by(count);
+	struct phm_uvd_clock_voltage_dependency_record entries[];
 };
 
 struct phm_acp_clock_voltage_dependency_record {
@@ -150,7 +150,7 @@ struct phm_acp_clock_voltage_dependency_record {
 
 struct phm_acp_clock_voltage_dependency_table {
 	uint32_t count;
-	struct phm_acp_clock_voltage_dependency_record entries[] __counted_by(count);
+	struct phm_acp_clock_voltage_dependency_record entries[];
 };
 
 struct phm_vce_clock_voltage_dependency_record {
@@ -161,32 +161,32 @@ struct phm_vce_clock_voltage_dependency_record {
 
 struct phm_phase_shedding_limits_table {
 	uint32_t count;
-	struct phm_phase_shedding_limits_record entries[] __counted_by(count);
+	struct phm_phase_shedding_limits_record entries[];
 };
 
 struct phm_vceclock_voltage_dependency_table {
 	uint8_t count;
-	struct phm_vceclock_voltage_dependency_record entries[] __counted_by(count);
+	struct phm_vceclock_voltage_dependency_record entries[];
 };
 
 struct phm_uvdclock_voltage_dependency_table {
 	uint8_t count;
-	struct phm_uvdclock_voltage_dependency_record entries[] __counted_by(count);
+	struct phm_uvdclock_voltage_dependency_record entries[];
 };
 
 struct phm_samuclock_voltage_dependency_table {
 	uint8_t count;
-	struct phm_samuclock_voltage_dependency_record entries[] __counted_by(count);
+	struct phm_samuclock_voltage_dependency_record entries[];
 };
 
 struct phm_acpclock_voltage_dependency_table {
 	uint32_t count;
-	struct phm_acpclock_voltage_dependency_record entries[] __counted_by(count);
+	struct phm_acpclock_voltage_dependency_record entries[];
 };
 
 struct phm_vce_clock_voltage_dependency_table {
 	uint8_t count;
-	struct phm_vce_clock_voltage_dependency_record entries[] __counted_by(count);
+	struct phm_vce_clock_voltage_dependency_record entries[];
 };
 
@@ -393,7 +393,7 @@ union phm_cac_leakage_record {
 
 struct phm_cac_leakage_table {
 	uint32_t count;
-	union phm_cac_leakage_record entries[] __counted_by(count);
+	union phm_cac_leakage_record entries[];
 };
 
 struct phm_samu_clock_voltage_dependency_record {
@@ -404,7 +404,7 @@ struct phm_samu_clock_voltage_dependency_record {
 
 struct phm_samu_clock_voltage_dependency_table {
 	uint8_t count;
-	struct phm_samu_clock_voltage_dependency_record entries[] __counted_by(count);
+	struct phm_samu_clock_voltage_dependency_record entries[];
 };
 
 struct phm_cac_tdp_table {

View File
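
__counted_by(count) tells the compiler that the flexible array holds exactly `count` valid elements, which lets -fsanitize=bounds and FORTIFY check each access; the annotation is only sound if `count` is assigned before the array is indexed and never understates the allocation. Dropping it across these powerplay tables suggests they are populated in an order that violates that contract. A minimal sketch of the ordering requirement (assumes a compiler with counted_by support, such as recent clang or GCC; the fallback macro and names are illustrative, not the kernel's):

    #include <stdint.h>
    #include <stdlib.h>

    #ifndef __counted_by
    #define __counted_by(m) /* expands to the counted_by attribute in the kernel */
    #endif

    struct table {
        uint32_t count;
        uint32_t values[] __counted_by(count);
    };

    int main(void)
    {
        struct table *t = malloc(sizeof(*t) + 4 * sizeof(uint32_t));

        if (!t)
            return 1;
        t->count = 4;     /* the bound must be set BEFORE values[] is touched... */
        t->values[3] = 0; /* ...or -fsanitize=bounds flags this access           */
        free(t);
        return 0;
    }
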
@@ -1264,7 +1264,11 @@ static int smu_sw_init(void *handle)
 	smu->workload_prority[PP_SMC_POWER_PROFILE_VR] = 4;
 	smu->workload_prority[PP_SMC_POWER_PROFILE_COMPUTE] = 5;
 	smu->workload_prority[PP_SMC_POWER_PROFILE_CUSTOM] = 6;
-	smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT];
+
+	if (smu->is_apu)
+		smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT];
+	else
+		smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D];
 
 	smu->workload_setting[0] = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
 	smu->workload_setting[1] = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
@@ -2226,7 +2230,7 @@ static int smu_bump_power_profile_mode(struct smu_context *smu,
 static int smu_adjust_power_state_dynamic(struct smu_context *smu,
 					  enum amd_dpm_forced_level level,
 					  bool skip_display_settings,
-					  bool force_update)
+					  bool init)
 {
 	int ret = 0;
 	int index = 0;
@@ -2255,7 +2259,7 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
 		}
 	}
 
-	if (force_update || smu_dpm_ctx->dpm_level != level) {
+	if (smu_dpm_ctx->dpm_level != level) {
 		ret = smu_asic_set_performance_level(smu, level);
 		if (ret) {
 			dev_err(smu->adev->dev, "Failed to set performance level!");
@@ -2272,7 +2276,7 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
 		index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
 		workload[0] = smu->workload_setting[index];
 
-		if (force_update || smu->power_profile_mode != workload[0])
+		if (init || smu->power_profile_mode != workload[0])
 			smu_bump_power_profile_mode(smu, workload, 0);
 	}

View File
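
In the smu_sw_init() hunk, each power profile carries a priority and workload_mask holds one bit per selected profile, so the boot-time default can differ by device class: APUs keep the bootup-default profile, dGPUs start on fullscreen 3D. A tiny self-contained sketch of the bit-per-priority idea (illustrative values only, not the driver's tables):

    #include <stdio.h>

    enum profile { PROFILE_BOOTUP_DEFAULT, PROFILE_FULLSCREEN3D, PROFILE_COUNT };

    int main(void)
    {
        int priority[PROFILE_COUNT] = { 0, 1 };  /* profile -> priority bit */
        unsigned int workload_mask;
        int is_apu = 0;

        if (is_apu)
            workload_mask = 1u << priority[PROFILE_BOOTUP_DEFAULT];
        else
            workload_mask = 1u << priority[PROFILE_FULLSCREEN3D];

        printf("workload_mask=0x%x\n", workload_mask);  /* dGPU case: 0x2 */
        return 0;
    }
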
@@ -2555,18 +2555,16 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
 	workload_mask = 1 << workload_type;
 
 	/* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */
-	if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE) {
-		if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
-		     ((smu->adev->pm.fw_version == 0x004e6601) ||
-		      (smu->adev->pm.fw_version >= 0x004e7300))) ||
-		    (amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 10) &&
-		     smu->adev->pm.fw_version >= 0x00504500)) {
-			workload_type = smu_cmn_to_asic_specific_index(smu,
-								       CMN2ASIC_MAPPING_WORKLOAD,
-								       PP_SMC_POWER_PROFILE_POWERSAVING);
-			if (workload_type >= 0)
-				workload_mask |= 1 << workload_type;
-		}
+	if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
+	     ((smu->adev->pm.fw_version == 0x004e6601) ||
+	      (smu->adev->pm.fw_version >= 0x004e7300))) ||
+	    (amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 10) &&
+	     smu->adev->pm.fw_version >= 0x00504500)) {
+		workload_type = smu_cmn_to_asic_specific_index(smu,
+							       CMN2ASIC_MAPPING_WORKLOAD,
+							       PP_SMC_POWER_PROFILE_POWERSAVING);
+		if (workload_type >= 0)
+			workload_mask |= 1 << workload_type;
 	}
 
 	ret = smu_cmn_send_smc_msg_with_param(smu,

View File
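
This hunk drops the COMPUTE-only wrapper, so the power-saving workload bit is ORed in whenever the firmware is new enough, regardless of the active profile. The guard itself is plain firmware-version gating; a stand-alone sketch (thresholds copied from the hunk above, the IP-version flags are stand-ins):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int ip_13_0_0 = 1, ip_13_0_10 = 0;  /* pretend IP version checks   */
        uint32_t fw = 0x004e7300;           /* firmware version under test */
        bool reuse_powersave =
            (ip_13_0_0 && (fw == 0x004e6601 || fw >= 0x004e7300)) ||
            (ip_13_0_10 && fw >= 0x00504500);

        printf("powersave bit %s\n", reuse_powersave ? "added" : "skipped");
        return 0;
    }
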
@@ -29,6 +29,8 @@ static int ast_sil164_connector_helper_get_modes(struct drm_connector *connector
 	if (ast_connector->physical_status == connector_status_connected) {
 		count = drm_connector_helper_get_modes(connector);
 	} else {
+		drm_edid_connector_update(connector, NULL);
+
 		/*
 		 * There's no EDID data without a connected monitor. Set BMC-
 		 * compatible modes in this case. The XGA default resolution

View File
@@ -29,6 +29,8 @@ static int ast_vga_connector_helper_get_modes(struct drm_connector *connector)
 	if (ast_connector->physical_status == connector_status_connected) {
 		count = drm_connector_helper_get_modes(connector);
 	} else {
+		drm_edid_connector_update(connector, NULL);
+
 		/*
 		 * There's no EDID data without a connected monitor. Set BMC-
 		 * compatible modes in this case. The XGA default resolution

View File
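
Both ast hunks make the disconnected path call drm_edid_connector_update(connector, NULL) before falling back to BMC-compatible modes; passing NULL drops EDID-derived state left over from a previously attached monitor, so the fallback list is not polluted by a stale sink. The shape of the pattern, as a hedged sketch with stub types (not the DRM API):

    #include <stddef.h>
    #include <stdio.h>

    struct connector { const char *edid; int mode_count; };

    /* stand-in for drm_edid_connector_update(connector, NULL): forget any
     * cached sink description before computing fallback modes */
    static void edid_update(struct connector *c, const char *edid)
    {
        c->edid = edid;
        c->mode_count = 0;
    }

    int main(void)
    {
        struct connector c = { "stale EDID from old monitor", 42 };

        edid_update(&c, NULL);  /* clear stale data first...       */
        c.mode_count = 1;       /* ...then add the BMC-safe modes  */
        printf("modes=%d edid=%s\n", c.mode_count, c.edid ? c.edid : "none");
        return 0;
    }
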
@@ -50,7 +50,8 @@ static void drm_fbdev_dma_fb_destroy(struct fb_info *info)
 	if (!fb_helper->dev)
 		return;
 
-	fb_deferred_io_cleanup(info);
+	if (info->fbdefio)
+		fb_deferred_io_cleanup(info);
 	drm_fb_helper_fini(fb_helper);
 
 	drm_client_buffer_vunmap(fb_helper->buffer);

View File
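
The drm_fbdev_dma fix is the classic "only tear down what was set up" guard: fb_deferred_io_cleanup() must not run when deferred I/O was never initialized, and info->fbdefio being NULL is the marker for that. As a generic sketch of the pattern (toy types, not the fbdev API):

    #include <stdio.h>
    #include <stdlib.h>

    struct fb_info { void *fbdefio; };

    static void deferred_io_cleanup(struct fb_info *info)
    {
        free(info->fbdefio);
        info->fbdefio = NULL;
    }

    static void destroy(struct fb_info *info)
    {
        if (info->fbdefio)              /* guard: skip cleanup of a     */
            deferred_io_cleanup(info);  /* feature never initialized    */
    }

    int main(void)
    {
        struct fb_info no_defio = { NULL };

        destroy(&no_defio);             /* safe now; an unconditional   */
        printf("ok\n");                 /* cleanup could touch unset state */
        return 0;
    }
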
@@ -89,25 +89,19 @@ static int intel_dp_mst_max_dpt_bpp(const struct intel_crtc_state *crtc_state,
 
 static int intel_dp_mst_bw_overhead(const struct intel_crtc_state *crtc_state,
 				    const struct intel_connector *connector,
-				    bool ssc, bool dsc, int bpp_x16)
+				    bool ssc, int dsc_slice_count, int bpp_x16)
 {
 	const struct drm_display_mode *adjusted_mode =
 		&crtc_state->hw.adjusted_mode;
 	unsigned long flags = DRM_DP_BW_OVERHEAD_MST;
-	int dsc_slice_count = 0;
 	int overhead;
 
 	flags |= intel_dp_is_uhbr(crtc_state) ? DRM_DP_BW_OVERHEAD_UHBR : 0;
 	flags |= ssc ? DRM_DP_BW_OVERHEAD_SSC_REF_CLK : 0;
 	flags |= crtc_state->fec_enable ? DRM_DP_BW_OVERHEAD_FEC : 0;
 
-	if (dsc) {
+	if (dsc_slice_count)
 		flags |= DRM_DP_BW_OVERHEAD_DSC;
-		dsc_slice_count = intel_dp_dsc_get_slice_count(connector,
-							       adjusted_mode->clock,
-							       adjusted_mode->hdisplay,
-							       crtc_state->joiner_pipes);
-	}
 
 	overhead = drm_dp_bw_overhead(crtc_state->lane_count,
 				      adjusted_mode->hdisplay,
@@ -153,6 +147,19 @@ static int intel_dp_mst_calc_pbn(int pixel_clock, int bpp_x16, int bw_overhead)
 	return DIV_ROUND_UP(effective_data_rate * 64, 54 * 1000);
 }
 
+static int intel_dp_mst_dsc_get_slice_count(const struct intel_connector *connector,
+					    const struct intel_crtc_state *crtc_state)
+{
+	const struct drm_display_mode *adjusted_mode =
+		&crtc_state->hw.adjusted_mode;
+	int num_joined_pipes = crtc_state->joiner_pipes;
+
+	return intel_dp_dsc_get_slice_count(connector,
+					    adjusted_mode->clock,
+					    adjusted_mode->hdisplay,
+					    num_joined_pipes);
+}
+
 static int intel_dp_mst_find_vcpi_slots_for_bpp(struct intel_encoder *encoder,
 						struct intel_crtc_state *crtc_state,
 						int max_bpp,
@@ -172,6 +179,7 @@ static int intel_dp_mst_find_vcpi_slots_for_bpp(struct intel_encoder *encoder,
 	const struct drm_display_mode *adjusted_mode =
 		&crtc_state->hw.adjusted_mode;
 	int bpp, slots = -EINVAL;
+	int dsc_slice_count = 0;
 	int max_dpt_bpp;
 	int ret = 0;
 
@@ -203,6 +211,15 @@ static int intel_dp_mst_find_vcpi_slots_for_bpp(struct intel_encoder *encoder,
 	drm_dbg_kms(&i915->drm, "Looking for slots in range min bpp %d max bpp %d\n",
 		    min_bpp, max_bpp);
 
+	if (dsc) {
+		dsc_slice_count = intel_dp_mst_dsc_get_slice_count(connector, crtc_state);
+		if (!dsc_slice_count) {
+			drm_dbg_kms(&i915->drm, "Can't get valid DSC slice count\n");
+
+			return -ENOSPC;
+		}
+	}
+
 	for (bpp = max_bpp; bpp >= min_bpp; bpp -= step) {
 		int local_bw_overhead;
 		int remote_bw_overhead;
@@ -216,9 +233,9 @@ static int intel_dp_mst_find_vcpi_slots_for_bpp(struct intel_encoder *encoder,
 					intel_dp_output_bpp(crtc_state->output_format, bpp));
 
 		local_bw_overhead = intel_dp_mst_bw_overhead(crtc_state, connector,
-							     false, dsc, link_bpp_x16);
+							     false, dsc_slice_count, link_bpp_x16);
 		remote_bw_overhead = intel_dp_mst_bw_overhead(crtc_state, connector,
-							     true, dsc, link_bpp_x16);
+							     true, dsc_slice_count, link_bpp_x16);
 
 		intel_dp_mst_compute_m_n(crtc_state, connector,
 					 local_bw_overhead,
@@ -449,6 +466,9 @@ hblank_expansion_quirk_needs_dsc(const struct intel_connector *connector,
 	if (mode_hblank_period_ns(adjusted_mode) > hblank_limit)
 		return false;
 
+	if (!intel_dp_mst_dsc_get_slice_count(connector, crtc_state))
+		return false;
+
 	return true;
 }

View File
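
The intel_dp_mst.c hunks replace a bool dsc flag with a precomputed dsc_slice_count: the count is derived once per state through the new intel_dp_mst_dsc_get_slice_count() helper, zero is treated as "no valid DSC configuration" and fails early with -ENOSPC, and the overhead helper then only tests the count. A compact sketch of the compute-once, validate, pass-down shape (stub arithmetic, not i915's):

    #include <errno.h>
    #include <stdio.h>

    /* stand-in for intel_dp_dsc_get_slice_count(): 0 means "no valid config" */
    static int dsc_slice_count(int clock, int hdisplay)
    {
        return (clock > 0 && hdisplay > 0) ? hdisplay / 640 : 0;
    }

    static int bw_overhead(int slice_count, int bpp)
    {
        return bpp + (slice_count ? 8 : 0);  /* the count doubles as a flag */
    }

    static int find_slots(int dsc, int clock, int hdisplay)
    {
        int slices = 0;

        if (dsc) {
            slices = dsc_slice_count(clock, hdisplay);  /* compute once */
            if (!slices)
                return -ENOSPC;                         /* fail early   */
        }
        for (int bpp = 24; bpp >= 18; bpp--)            /* reuse per bpp */
            (void)bw_overhead(slices, bpp);
        return 0;
    }

    int main(void)
    {
        printf("%d\n", find_slots(1, 148500, 1920));
        return 0;
    }
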
@@ -438,6 +438,19 @@ bool intel_fb_needs_64k_phys(u64 modifier)
 					 INTEL_PLANE_CAP_NEED64K_PHYS);
 }
 
+/**
+ * intel_fb_is_tile4_modifier: Check if a modifier is a tile4 modifier type
+ * @modifier: Modifier to check
+ *
+ * Returns:
+ * Returns %true if @modifier is a tile4 modifier.
+ */
+bool intel_fb_is_tile4_modifier(u64 modifier)
+{
+	return plane_caps_contain_any(lookup_modifier(modifier)->plane_caps,
+				      INTEL_PLANE_CAP_TILING_4);
+}
+
 static bool check_modifier_display_ver_range(const struct intel_modifier_desc *md,
 					     u8 display_ver_from, u8 display_ver_until)
 {

View File
@@ -35,6 +35,7 @@ bool intel_fb_is_ccs_modifier(u64 modifier);
 bool intel_fb_is_rc_ccs_cc_modifier(u64 modifier);
 bool intel_fb_is_mc_ccs_modifier(u64 modifier);
 bool intel_fb_needs_64k_phys(u64 modifier);
+bool intel_fb_is_tile4_modifier(u64 modifier);
 bool intel_fb_is_ccs_aux_plane(const struct drm_framebuffer *fb, int color_plane);
 int intel_fb_rc_ccs_cc_plane(const struct drm_framebuffer *fb);

View File
@@ -1094,7 +1094,8 @@ static void intel_hdcp_update_value(struct intel_connector *connector,
 	hdcp->value = value;
 	if (update_property) {
 		drm_connector_get(&connector->base);
-		queue_work(i915->unordered_wq, &hdcp->prop_work);
+		if (!queue_work(i915->unordered_wq, &hdcp->prop_work))
+			drm_connector_put(&connector->base);
 	}
 }
 
@@ -2524,7 +2525,8 @@ void intel_hdcp_update_pipe(struct intel_atomic_state *state,
 		mutex_lock(&hdcp->mutex);
 		hdcp->value = DRM_MODE_CONTENT_PROTECTION_DESIRED;
 		drm_connector_get(&connector->base);
-		queue_work(i915->unordered_wq, &hdcp->prop_work);
+		if (!queue_work(i915->unordered_wq, &hdcp->prop_work))
+			drm_connector_put(&connector->base);
 		mutex_unlock(&hdcp->mutex);
 	}
 
@@ -2541,7 +2543,9 @@ void intel_hdcp_update_pipe(struct intel_atomic_state *state,
 	 */
 	if (!desired_and_not_enabled && !content_protection_type_changed) {
 		drm_connector_get(&connector->base);
-		queue_work(i915->unordered_wq, &hdcp->prop_work);
+		if (!queue_work(i915->unordered_wq, &hdcp->prop_work))
+			drm_connector_put(&connector->base);
 	}
 }

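All three intel_hdcp.c hunks apply the same rule: a connector reference is taken for the worker about to be queued, and if queue_work() returns false because the work item is already pending, that extra reference must be dropped on the spot, otherwise the connector leaks. A stand-alone sketch of the balance (toy refcount, not the DRM one):

    #include <stdbool.h>
    #include <stdio.h>

    struct connector { int refs; };

    static void get(struct connector *c) { c->refs++; }
    static void put(struct connector *c) { c->refs--; }

    /* stand-in for queue_work(): false means "already queued, ref not taken over" */
    static bool queue_work_stub(bool already_queued) { return !already_queued; }

    int main(void)
    {
        struct connector c = { 1 };

        get(&c);                        /* ref on behalf of the worker  */
        if (!queue_work_stub(true))     /* refused: work already queued */
            put(&c);                    /* give the ref back            */
        printf("refs=%d\n", c.refs);    /* balanced: prints 1           */
        return 0;
    }
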
Some files were not shown because too many files have changed in this diff.