Merge branch 'topic/cirrus-hp-g12' into for-linus

Pull Cirrus HD-audio quirks for HP G12 laptops.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
commit eb75d05d96
Author: Takashi Iwai <tiwai@suse.de>
Date:   2024-08-12 09:17:57 +02:00
237 changed files with 1914 additions and 881 deletions


@@ -166,6 +166,7 @@ Daniel Borkmann <daniel@iogearbox.net> <dborkman@redhat.com>
 Daniel Borkmann <daniel@iogearbox.net> <dxchgb@gmail.com>
 David Brownell <david-b@pacbell.net>
 David Collins <quic_collinsd@quicinc.com> <collinsd@codeaurora.org>
+David Heidelberg <david@ixit.cz> <d.okias@gmail.com>
 David Rheinsberg <david@readahead.eu> <dh.herrmann@gmail.com>
 David Rheinsberg <david@readahead.eu> <dh.herrmann@googlemail.com>
 David Rheinsberg <david@readahead.eu> <david.rheinsberg@gmail.com>


@@ -32,9 +32,9 @@ Description: (RW) The front button on the Turris Omnia router can be
           interrupt.
           This file switches between these two modes:
-          - "mcu" makes the button press event be handled by the MCU to
+          - ``mcu`` makes the button press event be handled by the MCU to
             change the LEDs panel intensity.
-          - "cpu" makes the button press event be handled by the CPU.
+          - ``cpu`` makes the button press event be handled by the CPU.

           Format: %s.


@@ -742,7 +742,7 @@ SecurityFlags	Flags which control security negotiation and
		may use NTLMSSP				0x00080
		must use NTLMSSP			0x80080
		seal (packet encryption)		0x00040
-		must seal (not implemented yet)		0x40040
+		must seal				0x40040

 cifsFYI		If set to non-zero value, additional debug information
		will be logged to the system error log. This field


@@ -17,9 +17,12 @@ properties:
     oneOf:
       # Samsung 13.3" FHD (1920x1080 pixels) eDP AMOLED panel
       - const: samsung,atna33xc20
-      # Samsung 14.5" WQXGA+ (2880x1800 pixels) eDP AMOLED panel
       - items:
-          - const: samsung,atna45af01
+          - enum:
+              # Samsung 14.5" WQXGA+ (2880x1800 pixels) eDP AMOLED panel
+              - samsung,atna45af01
+              # Samsung 14.5" 3K (2944x1840 pixels) eDP AMOLED panel
+              - samsung,atna45dc02
           - const: samsung,atna33xc20

   enable-gpios: true


@@ -18,6 +18,7 @@ properties:
       - usb424,2412
       - usb424,2417
       - usb424,2514
+      - usb424,2517

   reg: true


@@ -13,9 +13,9 @@ kernel.
 Hardware issues like Meltdown, Spectre, L1TF etc. must be treated
 differently because they usually affect all Operating Systems ("OS") and
 therefore need coordination across different OS vendors, distributions,
-hardware vendors and other parties. For some of the issues, software
-mitigations can depend on microcode or firmware updates, which need further
-coordination.
+silicon vendors, hardware integrators, and other parties. For some of the
+issues, software mitigations can depend on microcode or firmware updates,
+which need further coordination.

 .. _Contact:
@@ -32,8 +32,8 @@ Linux kernel security team (:ref:`Documentation/admin-guide/
 <securitybugs>`) instead.

 The team can be contacted by email at <hardware-security@kernel.org>. This
-is a private list of security officers who will help you to coordinate a
-fix according to our documented process.
+is a private list of security officers who will help you coordinate a fix
+according to our documented process.

 The list is encrypted and email to the list can be sent by either PGP or
 S/MIME encrypted and must be signed with the reporter's PGP key or S/MIME
@@ -43,7 +43,7 @@ the following URLs:
 - PGP: https://www.kernel.org/static/files/hardware-security.asc
 - S/MIME: https://www.kernel.org/static/files/hardware-security.crt

-While hardware security issues are often handled by the affected hardware
+While hardware security issues are often handled by the affected silicon
 vendor, we welcome contact from researchers or individuals who have
 identified a potential hardware flaw.
@@ -65,7 +65,7 @@ of Linux Foundation's IT operations personnel technically have the
 ability to access the embargoed information, but are obliged to
 confidentiality by their employment contract. Linux Foundation IT
 personnel are also responsible for operating and managing the rest of
-kernel.org infrastructure.
+kernel.org's infrastructure.

 The Linux Foundation's current director of IT Project infrastructure is
 Konstantin Ryabitsev.
@@ -85,7 +85,7 @@ Memorandum of Understanding
 The Linux kernel community has a deep understanding of the requirement to
 keep hardware security issues under embargo for coordination between
-different OS vendors, distributors, hardware vendors and other parties.
+different OS vendors, distributors, silicon vendors, and other parties.

 The Linux kernel community has successfully handled hardware security
 issues in the past and has the necessary mechanisms in place to allow
@@ -103,11 +103,11 @@ the issue in the best technical way.
 All involved developers pledge to adhere to the embargo rules and to keep
 the received information confidential. Violation of the pledge will lead to
 immediate exclusion from the current issue and removal from all related
-mailing-lists. In addition, the hardware security team will also exclude
+mailing lists. In addition, the hardware security team will also exclude
 the offender from future issues. The impact of this consequence is a highly
 effective deterrent in our community. In case a violation happens the
 hardware security team will inform the involved parties immediately. If you
-or anyone becomes aware of a potential violation, please report it
+or anyone else becomes aware of a potential violation, please report it
 immediately to the Hardware security officers.
@@ -124,14 +124,16 @@ method for these types of issues.
 Start of Disclosure
 """""""""""""""""""

-Disclosure starts by contacting the Linux kernel hardware security team by
-email. This initial contact should contain a description of the problem and
-a list of any known affected hardware. If your organization builds or
-distributes the affected hardware, we encourage you to also consider what
-other hardware could be affected.
+Disclosure starts by emailing the Linux kernel hardware security team per
+the Contact section above. This initial contact should contain a
+description of the problem and a list of any known affected silicon. If
+your organization builds or distributes the affected hardware, we encourage
+you to also consider what other hardware could be affected. The disclosing
+party is responsible for contacting the affected silicon vendors in a
+timely manner.

 The hardware security team will provide an incident-specific encrypted
-mailing-list which will be used for initial discussion with the reporter,
+mailing list which will be used for initial discussion with the reporter,
 further disclosure, and coordination of fixes.

 The hardware security team will provide the disclosing party a list of
@@ -158,8 +160,8 @@ This serves several purposes:
 - The disclosed entities can be contacted to name experts who should
   participate in the mitigation development.

-- If an expert which is required to handle an issue is employed by an
-  listed entity or member of an listed entity, then the response teams can
+- If an expert who is required to handle an issue is employed by a listed
+  entity or member of an listed entity, then the response teams can
   request the disclosure of that expert from that entity. This ensures
   that the expert is also part of the entity's response team.
@@ -169,8 +171,8 @@ Disclosure
 The disclosing party provides detailed information to the initial response
 team via the specific encrypted mailing-list.

-From our experience the technical documentation of these issues is usually
-a sufficient starting point and further technical clarification is best
+From our experience, the technical documentation of these issues is usually
+a sufficient starting point, and further technical clarification is best
 done via email.

 Mitigation development
@@ -179,57 +181,93 @@ Mitigation development
 The initial response team sets up an encrypted mailing-list or repurposes
 an existing one if appropriate.

-Using a mailing-list is close to the normal Linux development process and
-has been successfully used in developing mitigations for various hardware
+Using a mailing list is close to the normal Linux development process and
+has been successfully used to develop mitigations for various hardware
 security issues in the past.

-The mailing-list operates in the same way as normal Linux development.
-Patches are posted, discussed and reviewed and if agreed on applied to a
-non-public git repository which is only accessible to the participating
+The mailing list operates in the same way as normal Linux development.
+Patches are posted, discussed, and reviewed and if agreed upon, applied to
+a non-public git repository which is only accessible to the participating
 developers via a secure connection. The repository contains the main
 development branch against the mainline kernel and backport branches for
 stable kernel versions as necessary.

 The initial response team will identify further experts from the Linux
-kernel developer community as needed. Bringing in experts can happen at any
-time of the development process and needs to be handled in a timely manner.
+kernel developer community as needed. Any involved party can suggest
+further experts to be included, each of which will be subject to the same
+requirements outlined above.

-If an expert is employed by or member of an entity on the disclosure list
+Bringing in experts can happen at any time in the development process and
+needs to be handled in a timely manner.
+
+If an expert is employed by or a member of an entity on the disclosure list
 provided by the disclosing party, then participation will be requested from
 the relevant entity.

-If not, then the disclosing party will be informed about the experts
+If not, then the disclosing party will be informed about the experts'
 participation. The experts are covered by the Memorandum of Understanding
-and the disclosing party is requested to acknowledge the participation. In
-case that the disclosing party has a compelling reason to object, then this
-objection has to be raised within five work days and resolved with the
-incident team immediately. If the disclosing party does not react within
-five work days this is taken as silent acknowledgement.
+and the disclosing party is requested to acknowledge their participation.
+In the case where the disclosing party has a compelling reason to object,
+any objection must be raised within five working days and resolved with
+the incident team immediately. If the disclosing party does not react
+within five working days this is taken as silent acknowledgment.

-After acknowledgement or resolution of an objection the expert is disclosed
-by the incident team and brought into the development process.
+After the incident team acknowledges or resolves an objection, the expert
+is disclosed and brought into the development process.

 List participants may not communicate about the issue outside of the
 private mailing list. List participants may not use any shared resources
 (e.g. employer build farms, CI systems, etc) when working on patches.

+Early access
+""""""""""""
+
+The patches discussed and developed on the list can neither be distributed
+to any individual who is not a member of the response team nor to any other
+organization.
+
+To allow the affected silicon vendors to work with their internal teams and
+industry partners on testing, validation, and logistics, the following
+exception is provided:
+
+  Designated representatives of the affected silicon vendors are
+  allowed to hand over the patches at any time to the silicon
+  vendors response team. The representative must notify the kernel
+  response team about the handover. The affected silicon vendor must
+  have and maintain their own documented security process for any
+  patches shared with their response team that is consistent with
+  this policy.
+
+  The silicon vendors response team can distribute these patches to
+  their industry partners and to their internal teams under the
+  silicon vendors documented security process. Feedback from the
+  industry partners goes back to the silicon vendor and is
+  communicated by the silicon vendor to the kernel response team.
+
+  The handover to the silicon vendors response team removes any
+  responsibility or liability from the kernel response team regarding
+  premature disclosure, which happens due to the involvement of the
+  silicon vendors internal teams or industry partners. The silicon
+  vendor guarantees this release of liability by agreeing to this
+  process.
+
 Coordinated release
 """""""""""""""""""

-The involved parties will negotiate the date and time where the embargo
-ends. At that point the prepared mitigations are integrated into the
-relevant kernel trees and published. There is no pre-notification process:
-fixes are published in public and available to everyone at the same time.
+The involved parties will negotiate the date and time when the embargo
+ends. At that point, the prepared mitigations are published into the
+relevant kernel trees. There is no pre-notification process: the
+mitigations are published in public and available to everyone at the same
+time.

 While we understand that hardware security issues need coordinated embargo
-time, the embargo time should be constrained to the minimum time which is
-required for all involved parties to develop, test and prepare the
+time, the embargo time should be constrained to the minimum time that is
+required for all involved parties to develop, test, and prepare their
 mitigations. Extending embargo time artificially to meet conference talk
-dates or other non-technical reasons is creating more work and burden for
-the involved developers and response teams as the patches need to be kept
-up to date in order to follow the ongoing upstream kernel development,
-which might create conflicting changes.
+dates or other non-technical reasons creates more work and burden for the
+involved developers and response teams as the patches need to be kept up to
+date in order to follow the ongoing upstream kernel development, which
+might create conflicting changes.

 CVE assignment
 """"""""""""""
@@ -275,34 +313,35 @@ an involved disclosed party. The current ambassadors list:
 If you want your organization to be added to the ambassadors list, please
 contact the hardware security team. The nominated ambassador has to
-understand and support our process fully and is ideally well connected in
+understand and support our process fully and is ideally well-connected in
 the Linux kernel community.

 Encrypted mailing-lists
 -----------------------

-We use encrypted mailing-lists for communication. The operating principle
+We use encrypted mailing lists for communication. The operating principle
 of these lists is that email sent to the list is encrypted either with the
-list's PGP key or with the list's S/MIME certificate. The mailing-list
+list's PGP key or with the list's S/MIME certificate. The mailing list
 software decrypts the email and re-encrypts it individually for each
 subscriber with the subscriber's PGP key or S/MIME certificate. Details
-about the mailing-list software and the setup which is used to ensure the
+about the mailing list software and the setup that is used to ensure the
 security of the lists and protection of the data can be found here:
 https://korg.wiki.kernel.org/userdoc/remail.

 List keys
 ^^^^^^^^^

-For initial contact see :ref:`Contact`. For incident specific mailing-lists
-the key and S/MIME certificate are conveyed to the subscribers by email
-sent from the specific list.
+For initial contact see the :ref:`Contact` section above. For incident
+specific mailing lists, the key and S/MIME certificate are conveyed to the
+subscribers by email sent from the specific list.

-Subscription to incident specific lists
+Subscription to incident-specific lists
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Subscription is handled by the response teams. Disclosed parties who want
-to participate in the communication send a list of potential subscribers to
-the response team so the response team can validate subscription requests.
+Subscription to incident-specific lists is handled by the response teams.
+Disclosed parties who want to participate in the communication send a list
+of potential experts to the response team so the response team can validate
+subscription requests.

 Each subscriber needs to send a subscription request to the response team
 by email. The email must be signed with the subscriber's PGP key or S/MIME


@@ -130,12 +130,12 @@ data using the `bmfdec <https://github.com/pali/bmfdec>`_ utility:

 Due to a peculiarity in how Windows handles the ``CreateByteField()`` ACPI operator (errors only
 happen when a invalid byte field is ultimately accessed), all methods require a 32 byte input
-buffer, even if the Binay MOF says otherwise.
+buffer, even if the Binary MOF says otherwise.

 The input buffer contains a single byte to select the subfeature to be accessed and 31 bytes of
 input data, the meaning of which depends on the subfeature being accessed.

-The output buffer contains a singe byte which signals success or failure (``0x00`` on failure)
+The output buffer contains a single byte which signals success or failure (``0x00`` on failure)
 and 31 bytes of output data, the meaning if which depends on the subfeature being accessed.

 WMI method Get_EC()
@@ -147,7 +147,7 @@ data contains a flag byte and a 28 byte controller firmware version string.
 The first 4 bits of the flag byte contain the minor version of the embedded controller interface,
 with the next 2 bits containing the major version of the embedded controller interface.

-The 7th bit signals if the embedded controller page chaged (exact meaning is unknown), and the
+The 7th bit signals if the embedded controller page changed (exact meaning is unknown), and the
 last bit signals if the platform is a Tigerlake platform.

 The MSI software seems to only use this interface when the last bit is set.


@@ -13324,14 +13324,16 @@ F:	Documentation/devicetree/bindings/i2c/i2c-mux-ltc4306.txt
 F:	drivers/i2c/muxes/i2c-mux-ltc4306.c

 LTP (Linux Test Project)
+M:	Andrea Cervesato <andrea.cervesato@suse.com>
 M:	Cyril Hrubis <chrubis@suse.cz>
 M:	Jan Stancek <jstancek@redhat.com>
 M:	Petr Vorel <pvorel@suse.cz>
 M:	Li Wang <liwang@redhat.com>
 M:	Yang Xu <xuyang2018.jy@fujitsu.com>
+M:	Xiao Yang <yangx.jy@fujitsu.com>
 L:	ltp@lists.linux.it (subscribers-only)
 S:	Maintained
-W:	http://linux-test-project.github.io/
+W:	https://linux-test-project.readthedocs.io/
 T:	git https://github.com/linux-test-project/ltp.git

 LTR390 AMBIENT/UV LIGHT SENSOR DRIVER
@@ -13539,7 +13541,7 @@ MARVELL GIGABIT ETHERNET DRIVERS (skge/sky2)
 M:	Mirko Lindner <mlindner@marvell.com>
 M:	Stephen Hemminger <stephen@networkplumber.org>
 L:	netdev@vger.kernel.org
-S:	Maintained
+S:	Odd fixes
 F:	drivers/net/ethernet/marvell/sk*

 MARVELL LIBERTAS WIRELESS DRIVER


@@ -2,7 +2,7 @@
 VERSION = 6
 PATCHLEVEL = 11
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc3
 NAME = Baby Opossum Posse

 # *DOCUMENTATION*


@@ -21,6 +21,7 @@
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/partitions.h>
 #include <linux/gpio/machine.h>
+#include <linux/gpio/property.h>
 #include <linux/gpio.h>
 #include <linux/err.h>
 #include <linux/clk.h>
@@ -40,6 +41,7 @@
 #include <linux/platform_data/mmc-pxamci.h>
 #include "udc.h"
 #include "gumstix.h"
+#include "devices.h"

 #include "generic.h"
@@ -99,8 +101,8 @@ static void __init gumstix_mmc_init(void)
 }
 #endif

-#ifdef CONFIG_USB_PXA25X
-static const struct property_entry spitz_mci_props[] __initconst = {
+#if IS_ENABLED(CONFIG_USB_PXA25X)
+static const struct property_entry gumstix_vbus_props[] __initconst = {
	PROPERTY_ENTRY_GPIO("vbus-gpios", &pxa2xx_gpiochip_node,
			    GPIO_GUMSTIX_USB_GPIOn, GPIO_ACTIVE_HIGH),
	PROPERTY_ENTRY_GPIO("pullup-gpios", &pxa2xx_gpiochip_node,
@@ -111,6 +113,7 @@ static const struct property_entry spitz_mci_props[] __initconst = {
 static const struct platform_device_info gumstix_gpio_vbus_info __initconst = {
	.name		= "gpio-vbus",
	.id		= PLATFORM_DEVID_NONE,
+	.properties	= gumstix_vbus_props,
 };

 static void __init gumstix_udc_init(void)


@@ -43,15 +43,6 @@
			sound-dai = <&mcasp0>;
		};
	};
-
-	reg_usb_hub: regulator-usb-hub {
-		compatible = "regulator-fixed";
-		enable-active-high;
-		/* Verdin CTRL_SLEEP_MOCI# (SODIMM 256) */
-		gpio = <&main_gpio0 31 GPIO_ACTIVE_HIGH>;
-		regulator-boot-on;
-		regulator-name = "HUB_PWR_EN";
-	};
 };

 /* Verdin ETHs */
@@ -193,11 +184,6 @@
	status = "okay";
 };

-/* Do not force CTRL_SLEEP_MOCI# always enabled */
-&reg_force_sleep_moci {
-	status = "disabled";
-};
-
 /* Verdin SD_1 */
 &sdhci1 {
	status = "okay";
@@ -218,15 +204,7 @@
 };

 &usb1 {
-	#address-cells = <1>;
-	#size-cells = <0>;
	status = "okay";
-
-	usb-hub@1 {
-		compatible = "usb424,2744";
-		reg = <1>;
-		vdd-supply = <&reg_usb_hub>;
-	};
 };

 /* Verdin CTRL_WAKE1_MICO# */


@@ -138,12 +138,6 @@
		vin-supply = <&reg_1v8>;
	};

-	/*
-	 * By default we enable CTRL_SLEEP_MOCI#, this is required to have
-	 * peripherals on the carrier board powered.
-	 * If more granularity or power saving is required this can be disabled
-	 * in the carrier board device tree files.
-	 */
	reg_force_sleep_moci: regulator-force-sleep-moci {
		compatible = "regulator-fixed";
		enable-active-high;


@@ -146,6 +146,8 @@
		power-domains = <&k3_pds 79 TI_SCI_PD_EXCLUSIVE>;
		clocks = <&k3_clks 79 0>;
		clock-names = "gpio";
+		gpio-ranges = <&mcu_pmx0 0 0 21>, <&mcu_pmx0 21 23 1>,
+			      <&mcu_pmx0 22 32 2>;
	};

	mcu_rti0: watchdog@4880000 {


@@ -45,7 +45,8 @@
 &main_pmx0 {
	pinctrl-single,gpio-range =
		<&main_pmx0_range 0 32 PIN_GPIO_RANGE_IOPAD>,
-		<&main_pmx0_range 33 92 PIN_GPIO_RANGE_IOPAD>,
+		<&main_pmx0_range 33 38 PIN_GPIO_RANGE_IOPAD>,
+		<&main_pmx0_range 72 22 PIN_GPIO_RANGE_IOPAD>,
		<&main_pmx0_range 137 5 PIN_GPIO_RANGE_IOPAD>,
		<&main_pmx0_range 143 3 PIN_GPIO_RANGE_IOPAD>,
		<&main_pmx0_range 149 2 PIN_GPIO_RANGE_IOPAD>;


@@ -193,7 +193,8 @@
 &main_pmx0 {
	pinctrl-single,gpio-range =
		<&main_pmx0_range 0 32 PIN_GPIO_RANGE_IOPAD>,
-		<&main_pmx0_range 33 55 PIN_GPIO_RANGE_IOPAD>,
+		<&main_pmx0_range 33 38 PIN_GPIO_RANGE_IOPAD>,
+		<&main_pmx0_range 72 17 PIN_GPIO_RANGE_IOPAD>,
		<&main_pmx0_range 101 25 PIN_GPIO_RANGE_IOPAD>,
		<&main_pmx0_range 137 5 PIN_GPIO_RANGE_IOPAD>,
		<&main_pmx0_range 143 3 PIN_GPIO_RANGE_IOPAD>,


@@ -1262,6 +1262,14 @@
 &serdes0 {
	status = "okay";

+	serdes0_pcie1_link: phy@0 {
+		reg = <0>;
+		cdns,num-lanes = <2>;
+		#phy-cells = <0>;
+		cdns,phy-type = <PHY_TYPE_PCIE>;
+		resets = <&serdes_wiz0 1>, <&serdes_wiz0 2>;
+	};
+
	serdes0_usb_link: phy@3 {
		reg = <3>;
		cdns,num-lanes = <1>;
@@ -1386,23 +1394,6 @@
	phys = <&transceiver3>;
 };

-&serdes0 {
-	status = "okay";
-
-	serdes0_pcie1_link: phy@0 {
-		reg = <0>;
-		cdns,num-lanes = <4>;
-		#phy-cells = <0>;
-		cdns,phy-type = <PHY_TYPE_PCIE>;
-		resets = <&serdes_wiz0 1>, <&serdes_wiz0 2>,
-			 <&serdes_wiz0 3>, <&serdes_wiz0 4>;
-	};
-};
-
-&serdes_wiz0 {
-	status = "okay";
-};
-
 &pcie1_rc {
	status = "okay";
	num-lanes = <2>;


@@ -2755,7 +2755,7 @@
		interrupts = <GIC_SPI 550 IRQ_TYPE_LEVEL_HIGH>,
			     <GIC_SPI 551 IRQ_TYPE_LEVEL_HIGH>;
		interrupt-names = "tx", "rx";
-		dmas = <&main_udmap 0xc500>, <&main_udmap 0x4500>;
+		dmas = <&main_udmap 0xc403>, <&main_udmap 0x4403>;
		dma-names = "tx", "rx";
		clocks = <&k3_clks 268 0>;
		clock-names = "fck";
@@ -2773,7 +2773,7 @@
		interrupts = <GIC_SPI 552 IRQ_TYPE_LEVEL_HIGH>,
			     <GIC_SPI 553 IRQ_TYPE_LEVEL_HIGH>;
		interrupt-names = "tx", "rx";
-		dmas = <&main_udmap 0xc501>, <&main_udmap 0x4501>;
+		dmas = <&main_udmap 0xc404>, <&main_udmap 0x4404>;
		dma-names = "tx", "rx";
		clocks = <&k3_clks 269 0>;
		clock-names = "fck";


@@ -34,7 +34,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
					    unsigned long addr, pte_t *ptep)
 {
	pte_t clear;
-	pte_t pte = *ptep;
+	pte_t pte = ptep_get(ptep);

	pte_val(clear) = (unsigned long)invalid_pte_table;
	set_pte_at(mm, addr, ptep, clear);
@@ -65,7 +65,7 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
					     pte_t *ptep, pte_t pte,
					     int dirty)
 {
-	int changed = !pte_same(*ptep, pte);
+	int changed = !pte_same(ptep_get(ptep), pte);

	if (changed) {
		set_pte_at(vma->vm_mm, addr, ptep, pte);


@@ -53,13 +53,13 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
 {
	pte_t *pte = virt_to_kpte(addr);

-	if (WARN_ON(!pte) || pte_none(*pte))
+	if (WARN_ON(!pte) || pte_none(ptep_get(pte)))
		return false;

	if (protect)
-		set_pte(pte, __pte(pte_val(*pte) & ~(_PAGE_VALID | _PAGE_PRESENT)));
+		set_pte(pte, __pte(pte_val(ptep_get(pte)) & ~(_PAGE_VALID | _PAGE_PRESENT)));
	else
-		set_pte(pte, __pte(pte_val(*pte) | (_PAGE_VALID | _PAGE_PRESENT)));
+		set_pte(pte, __pte(pte_val(ptep_get(pte)) | (_PAGE_VALID | _PAGE_PRESENT)));

	preempt_disable();
	local_flush_tlb_one(addr);


@@ -26,8 +26,6 @@
 #define KVM_MAX_VCPUS			256
 #define KVM_MAX_CPUCFG_REGS		21

-/* memory slots that does not exposed to userspace */
-#define KVM_PRIVATE_MEM_SLOTS		0
-
 #define KVM_HALT_POLL_NS_DEFAULT	500000
 #define KVM_REQ_TLB_FLUSH_GPA		KVM_ARCH_REQ(0)


@@ -39,9 +39,9 @@ struct kvm_steal_time {
  * Hypercall interface for KVM hypervisor
  *
  * a0: function identifier
- * a1-a6: args
+ * a1-a5: args
  * Return value will be placed in a0.
- * Up to 6 arguments are passed in a1, a2, a3, a4, a5, a6.
+ * Up to 5 arguments are passed in a1, a2, a3, a4, a5.
  */
 static __always_inline long kvm_hypercall0(u64 fid)
 {


@@ -106,6 +106,9 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
 #define KFENCE_AREA_START	(VMEMMAP_END + 1)
 #define KFENCE_AREA_END		(KFENCE_AREA_START + KFENCE_AREA_SIZE - 1)

+#define ptep_get(ptep) READ_ONCE(*(ptep))
+#define pmdp_get(pmdp) READ_ONCE(*(pmdp))
+
 #define pte_ERROR(e) \
	pr_err("%s:%d: bad pte %016lx.\n", __FILE__, __LINE__, pte_val(e))
 #ifndef __PAGETABLE_PMD_FOLDED
@@ -147,11 +150,6 @@ static inline int p4d_present(p4d_t p4d)
	return p4d_val(p4d) != (unsigned long)invalid_pud_table;
 }

-static inline void p4d_clear(p4d_t *p4dp)
-{
-	p4d_val(*p4dp) = (unsigned long)invalid_pud_table;
-}
-
 static inline pud_t *p4d_pgtable(p4d_t p4d)
 {
	return (pud_t *)p4d_val(p4d);
@@ -159,7 +157,12 @@ static inline pud_t *p4d_pgtable(p4d_t p4d)
 static inline void set_p4d(p4d_t *p4d, p4d_t p4dval)
 {
-	*p4d = p4dval;
+	WRITE_ONCE(*p4d, p4dval);
+}
+
+static inline void p4d_clear(p4d_t *p4dp)
+{
+	set_p4d(p4dp, __p4d((unsigned long)invalid_pud_table));
 }

 #define p4d_phys(p4d)		PHYSADDR(p4d_val(p4d))
@@ -193,17 +196,20 @@ static inline int pud_present(pud_t pud)
	return pud_val(pud) != (unsigned long)invalid_pmd_table;
 }

-static inline void pud_clear(pud_t *pudp)
-{
-	pud_val(*pudp) = ((unsigned long)invalid_pmd_table);
-}
-
 static inline pmd_t *pud_pgtable(pud_t pud)
 {
	return (pmd_t *)pud_val(pud);
 }

-#define set_pud(pudptr, pudval) do { *(pudptr) = (pudval); } while (0)
+static inline void set_pud(pud_t *pud, pud_t pudval)
+{
+	WRITE_ONCE(*pud, pudval);
+}
+
+static inline void pud_clear(pud_t *pudp)
+{
+	set_pud(pudp, __pud((unsigned long)invalid_pmd_table));
+}

 #define pud_phys(pud)		PHYSADDR(pud_val(pud))
 #define pud_page(pud)		(pfn_to_page(pud_phys(pud) >> PAGE_SHIFT))
@@ -231,12 +237,15 @@ static inline int pmd_present(pmd_t pmd)
	return pmd_val(pmd) != (unsigned long)invalid_pte_table;
 }

-static inline void pmd_clear(pmd_t *pmdp)
+static inline void set_pmd(pmd_t *pmd, pmd_t pmdval)
 {
-	pmd_val(*pmdp) = ((unsigned long)invalid_pte_table);
+	WRITE_ONCE(*pmd, pmdval);
 }

-#define set_pmd(pmdptr, pmdval) do { *(pmdptr) = (pmdval); } while (0)
+static inline void pmd_clear(pmd_t *pmdp)
+{
+	set_pmd(pmdp, __pmd((unsigned long)invalid_pte_table));
+}

 #define pmd_phys(pmd)		PHYSADDR(pmd_val(pmd))
@@ -314,7 +323,8 @@ extern void paging_init(void);
 static inline void set_pte(pte_t *ptep, pte_t pteval)
 {
-	*ptep = pteval;
+	WRITE_ONCE(*ptep, pteval);
+
	if (pte_val(pteval) & _PAGE_GLOBAL) {
		pte_t *buddy = ptep_buddy(ptep);
		/*
@@ -341,8 +351,8 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
		: [buddy] "+m" (buddy->pte), [tmp] "=&r" (tmp)
		: [global] "r" (page_global));
 #else /* !CONFIG_SMP */
-		if (pte_none(*buddy))
-			pte_val(*buddy) = pte_val(*buddy) | _PAGE_GLOBAL;
+		if (pte_none(ptep_get(buddy)))
+			WRITE_ONCE(*buddy, __pte(pte_val(ptep_get(buddy)) | _PAGE_GLOBAL));
 #endif /* CONFIG_SMP */
	}
 }
@@ -350,7 +360,7 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
	/* Preserve global status for the pair */
-	if (pte_val(*ptep_buddy(ptep)) & _PAGE_GLOBAL)
+	if (pte_val(ptep_get(ptep_buddy(ptep))) & _PAGE_GLOBAL)
		set_pte(ptep, __pte(_PAGE_GLOBAL));
	else
		set_pte(ptep, __pte(0));
@@ -603,7 +613,7 @@ static inline pmd_t pmd_mkinvalid(pmd_t pmd)
 static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
					    unsigned long address, pmd_t *pmdp)
 {
-	pmd_t old = *pmdp;
+	pmd_t old = pmdp_get(pmdp);

	pmd_clear(pmdp);


@@ -66,6 +66,12 @@ void __init efi_runtime_init(void)
	set_bit(EFI_RUNTIME_SERVICES, &efi.flags);
 }

+bool efi_poweroff_required(void)
+{
+	return efi_enabled(EFI_RUNTIME_SERVICES) &&
+	       (acpi_gbl_reduced_hardware || acpi_no_s5);
+}
+
 unsigned long __initdata screen_info_table = EFI_INVALID_TABLE_ADDR;

 #if defined(CONFIG_SYSFB) || defined(CONFIG_EFI_EARLYCON)


@@ -714,19 +714,19 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
	 * value) and then p*d_offset() walks into the target huge page instead
	 * of the old page table (sees the new value).
	 */
-	pgd = READ_ONCE(*pgd_offset(kvm->mm, hva));
+	pgd = pgdp_get(pgd_offset(kvm->mm, hva));
	if (pgd_none(pgd))
		goto out;

-	p4d = READ_ONCE(*p4d_offset(&pgd, hva));
+	p4d = p4dp_get(p4d_offset(&pgd, hva));
	if (p4d_none(p4d) || !p4d_present(p4d))
		goto out;

-	pud = READ_ONCE(*pud_offset(&p4d, hva));
+	pud = pudp_get(pud_offset(&p4d, hva));
	if (pud_none(pud) || !pud_present(pud))
		goto out;

-	pmd = READ_ONCE(*pmd_offset(&pud, hva));
+	pmd = pmdp_get(pmd_offset(&pud, hva));
	if (pmd_none(pmd) || !pmd_present(pmd))
		goto out;


@@ -39,11 +39,11 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
	pmd_t *pmd = NULL;

	pgd = pgd_offset(mm, addr);
-	if (pgd_present(*pgd)) {
+	if (pgd_present(pgdp_get(pgd))) {
		p4d = p4d_offset(pgd, addr);
-		if (p4d_present(*p4d)) {
+		if (p4d_present(p4dp_get(p4d))) {
			pud = pud_offset(p4d, addr);
-			if (pud_present(*pud))
+			if (pud_present(pudp_get(pud)))
				pmd = pmd_offset(pud, addr);
		}
	}


@@ -141,7 +141,7 @@ void __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
 int __meminit vmemmap_check_pmd(pmd_t *pmd, int node,
				unsigned long addr, unsigned long next)
 {
-	int huge = pmd_val(*pmd) & _PAGE_HUGE;
+	int huge = pmd_val(pmdp_get(pmd)) & _PAGE_HUGE;

	if (huge)
		vmemmap_verify((pte_t *)pmd, node, addr, next);
@@ -173,7 +173,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
	pud_t *pud;
	pmd_t *pmd;

-	if (p4d_none(*p4d)) {
+	if (p4d_none(p4dp_get(p4d))) {
		pud = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
		if (!pud)
			panic("%s: Failed to allocate memory\n", __func__);
@@ -184,7 +184,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
	}

	pud = pud_offset(p4d, addr);
-	if (pud_none(*pud)) {
+	if (pud_none(pudp_get(pud))) {
		pmd = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
		if (!pmd)
			panic("%s: Failed to allocate memory\n", __func__);
@@ -195,7 +195,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
	}

	pmd = pmd_offset(pud, addr);
-	if (!pmd_present(*pmd)) {
+	if (!pmd_present(pmdp_get(pmd))) {
		pte_t *pte;

		pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
@@ -216,7 +216,7 @@ void __init __set_fixmap(enum fixed_addresses idx,
	BUG_ON(idx <= FIX_HOLE || idx >= __end_of_fixed_addresses);

	ptep = populate_kernel_pte(addr);
-	if (!pte_none(*ptep)) {
+	if (!pte_none(ptep_get(ptep))) {
		pte_ERROR(*ptep);
		return;
	}


@@ -105,7 +105,7 @@ static phys_addr_t __init kasan_alloc_zeroed_page(int node)
 static pte_t *__init kasan_pte_offset(pmd_t *pmdp, unsigned long addr, int node, bool early)
 {
-	if (__pmd_none(early, READ_ONCE(*pmdp))) {
+	if (__pmd_none(early, pmdp_get(pmdp))) {
		phys_addr_t pte_phys = early ?
				__pa_symbol(kasan_early_shadow_pte) : kasan_alloc_zeroed_page(node);
		if (!early)
@@ -118,7 +118,7 @@ static pte_t *__init kasan_pte_offset(pmd_t *pmdp, unsigned long addr, int node,
 static pmd_t *__init kasan_pmd_offset(pud_t *pudp, unsigned long addr, int node, bool early)
 {
-	if (__pud_none(early, READ_ONCE(*pudp))) {
+	if (__pud_none(early, pudp_get(pudp))) {
		phys_addr_t pmd_phys = early ?
				__pa_symbol(kasan_early_shadow_pmd) : kasan_alloc_zeroed_page(node);
		if (!early)
@@ -131,7 +131,7 @@ static pmd_t *__init kasan_pmd_offset(pud_t *pudp, unsigned long addr, int node,
 static pud_t *__init kasan_pud_offset(p4d_t *p4dp, unsigned long addr, int node, bool early)
 {
-	if (__p4d_none(early, READ_ONCE(*p4dp))) {
+	if (__p4d_none(early, p4dp_get(p4dp))) {
		phys_addr_t pud_phys = early ?
				__pa_symbol(kasan_early_shadow_pud) : kasan_alloc_zeroed_page(node);
		if (!early)
@@ -154,7 +154,7 @@ static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
					      : kasan_alloc_zeroed_page(node);
		next = addr + PAGE_SIZE;
		set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
-	} while (ptep++, addr = next, addr != end && __pte_none(early, READ_ONCE(*ptep)));
+	} while (ptep++, addr = next, addr != end && __pte_none(early, ptep_get(ptep)));
 }

 static void __init kasan_pmd_populate(pud_t *pudp, unsigned long addr,
@@ -166,7 +166,7 @@ static void __init kasan_pmd_populate(pud_t *pudp, unsigned long addr,
	do {
		next = pmd_addr_end(addr, end);
		kasan_pte_populate(pmdp, addr, next, node, early);
-	} while (pmdp++, addr = next, addr != end && __pmd_none(early, READ_ONCE(*pmdp)));
+	} while (pmdp++, addr = next, addr != end && __pmd_none(early, pmdp_get(pmdp)));
 }

 static void __init kasan_pud_populate(p4d_t *p4dp, unsigned long addr,


@@ -128,7 +128,7 @@ pmd_t mk_pmd(struct page *page, pgprot_t prot)
 void set_pmd_at(struct mm_struct *mm, unsigned long addr,
		pmd_t *pmdp, pmd_t pmd)
 {
-	*pmdp = pmd;
+	WRITE_ONCE(*pmdp, pmd);
	flush_tlb_all();
 }


@@ -66,13 +66,15 @@ static inline bool vcpu_is_preempted(long cpu)
 #ifdef CONFIG_PARAVIRT
 /*
- * virt_spin_lock_key - enables (by default) the virt_spin_lock() hijack.
+ * virt_spin_lock_key - disables by default the virt_spin_lock() hijack.
  *
- * Native (and PV wanting native due to vCPU pinning) should disable this key.
- * It is done in this backwards fashion to only have a single direction change,
- * which removes ordering between native_pv_spin_init() and HV setup.
+ * Native (and PV wanting native due to vCPU pinning) should keep this key
+ * disabled. Native does not touch the key.
+ *
+ * When in a guest then native_pv_lock_init() enables the key first and
+ * KVM/XEN might conditionally disable it later in the boot process again.
  */
-DECLARE_STATIC_KEY_TRUE(virt_spin_lock_key);
+DECLARE_STATIC_KEY_FALSE(virt_spin_lock_key);

 /*
  * Shortcut for the queued_spin_lock_slowpath() function that allows


@@ -19,7 +19,7 @@
 static u64 acpi_mp_wake_mailbox_paddr __ro_after_init;

 /* Virtual address of the Multiprocessor Wakeup Structure mailbox */
-static struct acpi_madt_multiproc_wakeup_mailbox *acpi_mp_wake_mailbox __ro_after_init;
+static struct acpi_madt_multiproc_wakeup_mailbox *acpi_mp_wake_mailbox;

 static u64 acpi_mp_pgd __ro_after_init;
 static u64 acpi_mp_reset_vector_paddr __ro_after_init;


@@ -609,7 +609,7 @@ void mtrr_save_state(void)
 {
	int first_cpu;

-	if (!mtrr_enabled())
+	if (!mtrr_enabled() || !mtrr_state.have_fixed)
		return;

	first_cpu = cpumask_first(cpu_online_mask);


@@ -51,13 +51,12 @@ DEFINE_ASM_FUNC(pv_native_irq_enable, "sti", .noinstr.text);
 DEFINE_ASM_FUNC(pv_native_read_cr2, "mov %cr2, %rax", .noinstr.text);
 #endif

-DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
+DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);

 void __init native_pv_lock_init(void)
 {
-	if (IS_ENABLED(CONFIG_PARAVIRT_SPINLOCKS) &&
-	    !boot_cpu_has(X86_FEATURE_HYPERVISOR))
-		static_branch_disable(&virt_spin_lock_key);
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		static_branch_enable(&virt_spin_lock_key);
 }

 static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)


@@ -241,7 +241,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
  *
  * Returns a pointer to a PTE on success, or NULL on failure.
  */
-static pte_t *pti_user_pagetable_walk_pte(unsigned long address)
+static pte_t *pti_user_pagetable_walk_pte(unsigned long address, bool late_text)
 {
	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
	pmd_t *pmd;
@@ -251,11 +251,16 @@ static pte_t *pti_user_pagetable_walk_pte(unsigned long address)
	if (!pmd)
		return NULL;

-	/* We can't do anything sensible if we hit a large mapping. */
+	/* Large PMD mapping found */
	if (pmd_leaf(*pmd)) {
-		WARN_ON(1);
-		return NULL;
+		/* Clear the PMD if we hit a large mapping from the first round */
+		if (late_text) {
+			set_pmd(pmd, __pmd(0));
+		} else {
+			WARN_ON_ONCE(1);
+			return NULL;
+		}
	}

	if (pmd_none(*pmd)) {
		unsigned long new_pte_page = __get_free_page(gfp);
@@ -283,7 +288,7 @@ static void __init pti_setup_vsyscall(void)
	if (!pte || WARN_ON(level != PG_LEVEL_4K) || pte_none(*pte))
		return;

-	target_pte = pti_user_pagetable_walk_pte(VSYSCALL_ADDR);
+	target_pte = pti_user_pagetable_walk_pte(VSYSCALL_ADDR, false);
	if (WARN_ON(!target_pte))
		return;
@@ -301,7 +306,7 @@ enum pti_clone_level {
 static void
 pti_clone_pgtable(unsigned long start, unsigned long end,
-		  enum pti_clone_level level)
+		  enum pti_clone_level level, bool late_text)
 {
	unsigned long addr;
@@ -390,7 +395,7 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
				return;

			/* Allocate PTE in the user page-table */
-			target_pte = pti_user_pagetable_walk_pte(addr);
+			target_pte = pti_user_pagetable_walk_pte(addr, late_text);
			if (WARN_ON(!target_pte))
				return;
@@ -452,7 +457,7 @@ static void __init pti_clone_user_shared(void)
		phys_addr_t pa = per_cpu_ptr_to_phys((void *)va);
		pte_t *target_pte;

-		target_pte = pti_user_pagetable_walk_pte(va);
+		target_pte = pti_user_pagetable_walk_pte(va, false);
		if (WARN_ON(!target_pte))
			return;
@@ -475,7 +480,7 @@ static void __init pti_clone_user_shared(void)
	start = CPU_ENTRY_AREA_BASE;
	end = start + (PAGE_SIZE * CPU_ENTRY_AREA_PAGES);

-	pti_clone_pgtable(start, end, PTI_CLONE_PMD);
+	pti_clone_pgtable(start, end, PTI_CLONE_PMD, false);
 }
 #endif /* CONFIG_X86_64 */
@@ -492,11 +497,11 @@ static void __init pti_setup_espfix64(void)
 /*
  * Clone the populated PMDs of the entry text and force it RO.
  */
-static void pti_clone_entry_text(void)
+static void pti_clone_entry_text(bool late)
 {
	pti_clone_pgtable((unsigned long) __entry_text_start,
			  (unsigned long) __entry_text_end,
-			  PTI_LEVEL_KERNEL_IMAGE);
+			  PTI_LEVEL_KERNEL_IMAGE, late);
 }

 /*
@@ -571,7 +576,7 @@ static void pti_clone_kernel_text(void)
	 * pti_set_kernel_image_nonglobal() did to clear the
	 * global bit.
	 */
-	pti_clone_pgtable(start, end_clone, PTI_LEVEL_KERNEL_IMAGE);
+	pti_clone_pgtable(start, end_clone, PTI_LEVEL_KERNEL_IMAGE, false);

	/*
	 * pti_clone_pgtable() will set the global bit in any PMDs
@@ -638,8 +643,15 @@ void __init pti_init(void)
	/* Undo all global bits from the init pagetables in head_64.S: */
	pti_set_kernel_image_nonglobal();

-	/* Replace some of the global bits just for shared entry text: */
-	pti_clone_entry_text();
+	/*
+	 * This is very early in boot. Device and Late initcalls can do
+	 * modprobe before free_initmem() and mark_readonly(). This
+	 * pti_clone_entry_text() allows those user-mode-helpers to function,
+	 * but notably the text is still RW.
+	 */
+	pti_clone_entry_text(false);
+
	pti_setup_espfix64();
	pti_setup_vsyscall();
 }
@@ -656,10 +668,11 @@ void pti_finalize(void)
	if (!boot_cpu_has(X86_FEATURE_PTI))
		return;

	/*
-	 * We need to clone everything (again) that maps parts of the
-	 * kernel image.
+	 * This is after free_initmem() (all initcalls are done) and we've done
+	 * mark_readonly(). Text is now NX which might've split some PMDs
+	 * relative to the early clone.
	 */
-	pti_clone_entry_text();
+	pti_clone_entry_text(true);
	pti_clone_kernel_text();

	debug_checkwx_user();


@@ -31,14 +31,6 @@ static struct workqueue_struct *kthrotld_workqueue;

 #define rb_entry_tg(node)	rb_entry((node), struct throtl_grp, rb_node)

-/* We measure latency for request size from <= 4k to >= 1M */
-#define LATENCY_BUCKET_SIZE 9
-
-struct latency_bucket {
-	unsigned long total_latency; /* ns / 1024 */
-	int samples;
-};
-
 struct throtl_data
 {
	/* service tree for active throtl groups */
@@ -116,9 +108,6 @@ static unsigned int tg_iops_limit(struct throtl_grp *tg, int rw)
	return tg->iops[rw];
 }

-#define request_bucket_index(sectors) \
-	clamp_t(int, order_base_2(sectors) - 3, 0, LATENCY_BUCKET_SIZE - 1)
-
 /**
  * throtl_log - log debug message via blktrace
  * @sq: the service_queue being reported


@@ -1044,13 +1044,13 @@ static struct binder_ref *binder_get_ref_olocked(struct binder_proc *proc,
 }

 /* Find the smallest unused descriptor the "slow way" */
-static u32 slow_desc_lookup_olocked(struct binder_proc *proc)
+static u32 slow_desc_lookup_olocked(struct binder_proc *proc, u32 offset)
 {
	struct binder_ref *ref;
	struct rb_node *n;
	u32 desc;

-	desc = 1;
+	desc = offset;
	for (n = rb_first(&proc->refs_by_desc); n; n = rb_next(n)) {
		ref = rb_entry(n, struct binder_ref, rb_node_desc);
		if (ref->data.desc > desc)
@@ -1071,21 +1071,18 @@ static int get_ref_desc_olocked(struct binder_proc *proc,
			       u32 *desc)
 {
	struct dbitmap *dmap = &proc->dmap;
+	unsigned int nbits, offset;
	unsigned long *new, bit;
-	unsigned int nbits;

	/* 0 is reserved for the context manager */
-	if (node == proc->context->binder_context_mgr_node) {
-		*desc = 0;
-		return 0;
-	}
+	offset = (node == proc->context->binder_context_mgr_node) ? 0 : 1;

	if (!dbitmap_enabled(dmap)) {
-		*desc = slow_desc_lookup_olocked(proc);
+		*desc = slow_desc_lookup_olocked(proc, offset);
		return 0;
	}

-	if (dbitmap_acquire_first_zero_bit(dmap, &bit) == 0) {
+	if (dbitmap_acquire_next_zero_bit(dmap, offset, &bit) == 0) {
		*desc = bit;
		return 0;
	}


@@ -939,9 +939,9 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
			__free_page(alloc->pages[i].page_ptr);
			page_count++;
		}
-		kvfree(alloc->pages);
	}
	spin_unlock(&alloc->lock);
+	kvfree(alloc->pages);
	if (alloc->mm)
		mmdrop(alloc->mm);

View File

@ -6,8 +6,7 @@
* *
* Used by the binder driver to optimize the allocation of the smallest * Used by the binder driver to optimize the allocation of the smallest
* available descriptor ID. Each bit in the bitmap represents the state * available descriptor ID. Each bit in the bitmap represents the state
* of an ID, with the exception of BIT(0) which is used exclusively to * of an ID.
* reference binder's context manager.
* *
* A dbitmap can grow or shrink as needed. This part has been designed * A dbitmap can grow or shrink as needed. This part has been designed
* considering that users might need to briefly release their locks in * considering that users might need to briefly release their locks in
@ -58,11 +57,7 @@ static inline unsigned int dbitmap_shrink_nbits(struct dbitmap *dmap)
if (bit < (dmap->nbits >> 2)) if (bit < (dmap->nbits >> 2))
return dmap->nbits >> 1; return dmap->nbits >> 1;
/* /* find_last_bit() returns dmap->nbits when no bits are set. */
* Note that find_last_bit() returns dmap->nbits when no bits
* are set. While this is technically not possible here since
* BIT(0) is always set, this check is left for extra safety.
*/
if (bit == dmap->nbits) if (bit == dmap->nbits)
return NBITS_MIN; return NBITS_MIN;
@ -132,16 +127,17 @@ dbitmap_grow(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
} }
/* /*
* Finds and sets the first zero bit in the bitmap. Upon success @bit * Finds and sets the next zero bit in the bitmap. Upon success @bit
* is populated with the index and 0 is returned. Otherwise, -ENOSPC * is populated with the index and 0 is returned. Otherwise, -ENOSPC
* is returned to indicate that a dbitmap_grow() is needed. * is returned to indicate that a dbitmap_grow() is needed.
*/ */
static inline int static inline int
dbitmap_acquire_first_zero_bit(struct dbitmap *dmap, unsigned long *bit) dbitmap_acquire_next_zero_bit(struct dbitmap *dmap, unsigned long offset,
unsigned long *bit)
{ {
unsigned long n; unsigned long n;
n = find_first_zero_bit(dmap->map, dmap->nbits); n = find_next_zero_bit(dmap->map, dmap->nbits, offset);
if (n == dmap->nbits) if (n == dmap->nbits)
return -ENOSPC; return -ENOSPC;
@ -154,8 +150,6 @@ dbitmap_acquire_first_zero_bit(struct dbitmap *dmap, unsigned long *bit)
static inline void static inline void
dbitmap_clear_bit(struct dbitmap *dmap, unsigned long bit) dbitmap_clear_bit(struct dbitmap *dmap, unsigned long bit)
{ {
/* BIT(0) should always set for the context manager */
if (bit)
clear_bit(bit, dmap->map); clear_bit(bit, dmap->map);
} }
@ -168,8 +162,6 @@ static inline int dbitmap_init(struct dbitmap *dmap)
} }
dmap->nbits = NBITS_MIN; dmap->nbits = NBITS_MIN;
/* BIT(0) is reserved for the context manager */
set_bit(0, dmap->map);
return 0; return 0;
} }
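With BIT(0) no longer reserved up front, the bitmap fast path now searches from a caller-supplied offset. A self-contained sketch of the acquire step follows, assuming a single-word bitmap; find_next_zero_bit() and set_bit() below are tiny local stand-ins for the kernel helpers, not the real implementations.

#include <stdio.h>
#include <errno.h>

#define NBITS 64UL

/* Minimal stand-ins for the kernel bitmap helpers. */
static unsigned long find_next_zero_bit(const unsigned long *map,
					unsigned long nbits,
					unsigned long offset)
{
	unsigned long n;

	for (n = offset; n < nbits; n++)
		if (!(map[n / 64] & (1UL << (n % 64))))
			return n;
	return nbits;
}

static void set_bit(unsigned long n, unsigned long *map)
{
	map[n / 64] |= 1UL << (n % 64);
}

/* Acquire the next zero bit at or above 'offset', as the new
 * dbitmap_acquire_next_zero_bit() does. */
static int acquire_next_zero_bit(unsigned long *map, unsigned long offset,
				 unsigned long *bit)
{
	unsigned long n = find_next_zero_bit(map, NBITS, offset);

	if (n == NBITS)
		return -ENOSPC;		/* caller would grow the bitmap */

	*bit = n;
	set_bit(n, map);
	return 0;
}

int main(void)
{
	unsigned long map[1] = { 0 };
	unsigned long bit;

	acquire_next_zero_bit(map, 0, &bit);	/* context manager -> 0 */
	printf("first:  %lu\n", bit);
	acquire_next_zero_bit(map, 1, &bit);	/* ordinary ref -> 1 */
	printf("second: %lu\n", bit);
	return 0;
}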

View File

@ -25,6 +25,7 @@
#include <linux/mutex.h> #include <linux/mutex.h>
#include <linux/pm_runtime.h> #include <linux/pm_runtime.h>
#include <linux/netdevice.h> #include <linux/netdevice.h>
#include <linux/rcupdate.h>
#include <linux/sched/signal.h> #include <linux/sched/signal.h>
#include <linux/sched/mm.h> #include <linux/sched/mm.h>
#include <linux/string_helpers.h> #include <linux/string_helpers.h>
@ -2640,6 +2641,7 @@ static const char *dev_uevent_name(const struct kobject *kobj)
static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env) static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env)
{ {
const struct device *dev = kobj_to_dev(kobj); const struct device *dev = kobj_to_dev(kobj);
struct device_driver *driver;
int retval = 0; int retval = 0;
/* add device node properties if present */ /* add device node properties if present */
@ -2668,8 +2670,12 @@ static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env)
if (dev->type && dev->type->name) if (dev->type && dev->type->name)
add_uevent_var(env, "DEVTYPE=%s", dev->type->name); add_uevent_var(env, "DEVTYPE=%s", dev->type->name);
if (dev->driver) /* Synchronize with module_remove_driver() */
add_uevent_var(env, "DRIVER=%s", dev->driver->name); rcu_read_lock();
driver = READ_ONCE(dev->driver);
if (driver)
add_uevent_var(env, "DRIVER=%s", driver->name);
rcu_read_unlock();
/* Add common DT information about the device */ /* Add common DT information about the device */
of_device_uevent(dev, env); of_device_uevent(dev, env);
@ -2739,11 +2745,8 @@ static ssize_t uevent_show(struct device *dev, struct device_attribute *attr,
if (!env) if (!env)
return -ENOMEM; return -ENOMEM;
/* Synchronize with really_probe() */
device_lock(dev);
/* let the kset specific function add its keys */ /* let the kset specific function add its keys */
retval = kset->uevent_ops->uevent(&dev->kobj, env); retval = kset->uevent_ops->uevent(&dev->kobj, env);
device_unlock(dev);
if (retval) if (retval)
goto out; goto out;

View File

@ -7,6 +7,7 @@
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/string.h> #include <linux/string.h>
#include <linux/rcupdate.h>
#include "base.h" #include "base.h"
static char *make_driver_name(const struct device_driver *drv) static char *make_driver_name(const struct device_driver *drv)
@ -97,6 +98,9 @@ void module_remove_driver(const struct device_driver *drv)
if (!drv) if (!drv)
return; return;
/* Synchronize with dev_uevent() */
synchronize_rcu();
sysfs_remove_link(&drv->p->kobj, "module"); sysfs_remove_link(&drv->p->kobj, "module");
if (drv->owner) if (drv->owner)
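The two hunks above pair an rcu_read_lock()/READ_ONCE() reader in dev_uevent() with a synchronize_rcu() in module_remove_driver(): a uevent raced against driver removal either sees the old pointer, which stays valid until the grace period ends, or sees NULL. Below is a sketch of the same publish/retire pattern using userspace RCU, assuming liburcu is available (build with "gcc demo.c -lurcu"); the struct and function names are illustrative only.

#include <stdio.h>
#include <stdlib.h>
#include <urcu.h>

struct driver { const char *name; };

static struct driver *dev_driver;	/* stands in for dev->driver */

static void emit_uevent(void)
{
	struct driver *drv;

	rcu_read_lock();
	drv = rcu_dereference(dev_driver);	/* kernel uses READ_ONCE() here */
	if (drv)
		printf("DRIVER=%s\n", drv->name);
	rcu_read_unlock();
}

static void remove_driver(void)
{
	struct driver *old = dev_driver;

	rcu_assign_pointer(dev_driver, NULL);
	synchronize_rcu();	/* wait until no reader can still see 'old' */
	free(old);
}

int main(void)
{
	struct driver *drv = malloc(sizeof(*drv));

	drv->name = "demo";
	rcu_register_thread();
	rcu_assign_pointer(dev_driver, drv);

	emit_uevent();		/* prints DRIVER=demo */
	remove_driver();
	emit_uevent();		/* prints nothing, never a freed pointer */

	rcu_unregister_thread();
	return 0;
}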

View File

@ -2160,7 +2160,7 @@ static void qca_power_shutdown(struct hci_uart *hu)
qcadev = serdev_device_get_drvdata(hu->serdev); qcadev = serdev_device_get_drvdata(hu->serdev);
power = qcadev->bt_power; power = qcadev->bt_power;
if (power->pwrseq) { if (power && power->pwrseq) {
pwrseq_power_off(power->pwrseq); pwrseq_power_off(power->pwrseq);
set_bit(QCA_BT_OFF, &qca->flags); set_bit(QCA_BT_OFF, &qca->flags);
return; return;
@ -2187,10 +2187,6 @@ static void qca_power_shutdown(struct hci_uart *hu)
} }
break; break;
case QCA_QCA6390:
pwrseq_power_off(qcadev->bt_power->pwrseq);
break;
default: default:
gpiod_set_value_cansleep(qcadev->bt_en, 0); gpiod_set_value_cansleep(qcadev->bt_en, 0);
} }
@ -2416,11 +2412,14 @@ static int qca_serdev_probe(struct serdev_device *serdev)
break; break;
case QCA_QCA6390: case QCA_QCA6390:
if (dev_of_node(&serdev->dev)) {
qcadev->bt_power->pwrseq = devm_pwrseq_get(&serdev->dev, qcadev->bt_power->pwrseq = devm_pwrseq_get(&serdev->dev,
"bluetooth"); "bluetooth");
if (IS_ERR(qcadev->bt_power->pwrseq)) if (IS_ERR(qcadev->bt_power->pwrseq))
return PTR_ERR(qcadev->bt_power->pwrseq); return PTR_ERR(qcadev->bt_power->pwrseq);
break; break;
}
fallthrough;
default: default:
qcadev->bt_en = devm_gpiod_get_optional(&serdev->dev, "enable", qcadev->bt_en = devm_gpiod_get_optional(&serdev->dev, "enable",

View File

@ -421,4 +421,5 @@ static void __exit ds1620_exit(void)
module_init(ds1620_init); module_init(ds1620_init);
module_exit(ds1620_exit); module_exit(ds1620_exit);
MODULE_DESCRIPTION("Dallas Semiconductor DS1620 thermometer driver");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");

View File

@ -241,6 +241,7 @@ static void __exit nwbutton_exit (void)
MODULE_AUTHOR("Alex Holden"); MODULE_AUTHOR("Alex Holden");
MODULE_DESCRIPTION("NetWinder button driver");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
module_init(nwbutton_init); module_init(nwbutton_init);

View File

@ -618,6 +618,7 @@ static void __exit nwflash_exit(void)
iounmap((void *)FLASH_BASE); iounmap((void *)FLASH_BASE);
} }
MODULE_DESCRIPTION("NetWinder flash memory driver");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
module_param(flashdebug, bool, 0644); module_param(flashdebug, bool, 0644);

View File

@ -3405,6 +3405,7 @@ static const struct x86_cpu_id intel_epp_default[] = {
*/ */
X86_MATCH_VFM(INTEL_ALDERLAKE_L, HWP_SET_DEF_BALANCE_PERF_EPP(102)), X86_MATCH_VFM(INTEL_ALDERLAKE_L, HWP_SET_DEF_BALANCE_PERF_EPP(102)),
X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X, HWP_SET_DEF_BALANCE_PERF_EPP(32)), X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X, HWP_SET_DEF_BALANCE_PERF_EPP(32)),
X86_MATCH_VFM(INTEL_EMERALDRAPIDS_X, HWP_SET_DEF_BALANCE_PERF_EPP(32)),
X86_MATCH_VFM(INTEL_METEORLAKE_L, HWP_SET_EPP_VALUES(HWP_EPP_POWERSAVE, X86_MATCH_VFM(INTEL_METEORLAKE_L, HWP_SET_EPP_VALUES(HWP_EPP_POWERSAVE,
179, 64, 16)), 179, 64, 16)),
X86_MATCH_VFM(INTEL_ARROWLAKE, HWP_SET_EPP_VALUES(HWP_EPP_POWERSAVE, X86_MATCH_VFM(INTEL_ARROWLAKE, HWP_SET_EPP_VALUES(HWP_EPP_POWERSAVE,

View File

@ -1444,5 +1444,6 @@ static void fsi_exit(void)
} }
module_exit(fsi_exit); module_exit(fsi_exit);
module_param(discard_errors, int, 0664); module_param(discard_errors, int, 0664);
MODULE_DESCRIPTION("FSI core driver");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_PARM_DESC(discard_errors, "Don't invoke error handling on bus accesses"); MODULE_PARM_DESC(discard_errors, "Don't invoke error handling on bus accesses");

View File

@ -670,4 +670,5 @@ static struct platform_driver fsi_master_aspeed_driver = {
}; };
module_platform_driver(fsi_master_aspeed_driver); module_platform_driver(fsi_master_aspeed_driver);
MODULE_DESCRIPTION("FSI master driver for AST2600");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");

View File

@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0+ // SPDX-License-Identifier: GPL-2.0+
// Copyright 2018 IBM Corp // Copyright 2018 IBM Corp
/* /*
* A FSI master controller, using a simple GPIO bit-banging interface * A FSI master based on Aspeed ColdFire coprocessor
*/ */
#include <linux/crc4.h> #include <linux/crc4.h>
@ -1438,5 +1438,6 @@ static struct platform_driver fsi_master_acf = {
}; };
module_platform_driver(fsi_master_acf); module_platform_driver(fsi_master_acf);
MODULE_DESCRIPTION("A FSI master based on Aspeed ColdFire coprocessor");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_FIRMWARE(FW_FILE_NAME); MODULE_FIRMWARE(FW_FILE_NAME);

View File

@ -892,4 +892,5 @@ static struct platform_driver fsi_master_gpio_driver = {
}; };
module_platform_driver(fsi_master_gpio_driver); module_platform_driver(fsi_master_gpio_driver);
MODULE_DESCRIPTION("A FSI master controller, using a simple GPIO bit-banging interface");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");

View File

@ -295,4 +295,5 @@ static struct fsi_driver hub_master_driver = {
}; };
module_fsi_driver(hub_master_driver); module_fsi_driver(hub_master_driver);
MODULE_DESCRIPTION("FSI hub master driver");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");

View File

@ -625,4 +625,5 @@ static void scom_exit(void)
module_init(scom_init); module_init(scom_init);
module_exit(scom_exit); module_exit(scom_exit);
MODULE_DESCRIPTION("SCOM FSI Client device driver");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");

View File

@ -156,6 +156,8 @@ struct amdgpu_gmc_funcs {
uint64_t addr, uint64_t *flags); uint64_t addr, uint64_t *flags);
/* get the amount of memory used by the vbios for pre-OS console */ /* get the amount of memory used by the vbios for pre-OS console */
unsigned int (*get_vbios_fb_size)(struct amdgpu_device *adev); unsigned int (*get_vbios_fb_size)(struct amdgpu_device *adev);
/* get the DCC buffer alignment */
unsigned int (*get_dcc_alignment)(struct amdgpu_device *adev);
enum amdgpu_memory_partition (*query_mem_partition_mode)( enum amdgpu_memory_partition (*query_mem_partition_mode)(
struct amdgpu_device *adev); struct amdgpu_device *adev);
@ -363,6 +365,10 @@ struct amdgpu_gmc {
(adev)->gmc.gmc_funcs->override_vm_pte_flags \ (adev)->gmc.gmc_funcs->override_vm_pte_flags \
((adev), (vm), (addr), (pte_flags)) ((adev), (vm), (addr), (pte_flags))
#define amdgpu_gmc_get_vbios_fb_size(adev) (adev)->gmc.gmc_funcs->get_vbios_fb_size((adev)) #define amdgpu_gmc_get_vbios_fb_size(adev) (adev)->gmc.gmc_funcs->get_vbios_fb_size((adev))
#define amdgpu_gmc_get_dcc_alignment(adev) ({ \
typeof(adev) _adev = (adev); \
_adev->gmc.gmc_funcs->get_dcc_alignment(_adev); \
})
/** /**
* amdgpu_gmc_vram_full_visible - Check if full VRAM is visible through the BAR * amdgpu_gmc_vram_full_visible - Check if full VRAM is visible through the BAR

View File

@ -264,9 +264,8 @@ amdgpu_job_prepare_job(struct drm_sched_job *sched_job,
struct dma_fence *fence = NULL; struct dma_fence *fence = NULL;
int r; int r;
/* Ignore soft recovered fences here */
r = drm_sched_entity_error(s_entity); r = drm_sched_entity_error(s_entity);
if (r && r != -ENODATA) if (r)
goto error; goto error;
if (!fence && job->gang_submit) if (!fence && job->gang_submit)

View File

@ -456,6 +456,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
u64 vis_usage = 0, max_bytes, min_block_size; u64 vis_usage = 0, max_bytes, min_block_size;
struct amdgpu_vram_mgr_resource *vres; struct amdgpu_vram_mgr_resource *vres;
u64 size, remaining_size, lpfn, fpfn; u64 size, remaining_size, lpfn, fpfn;
unsigned int adjust_dcc_size = 0;
struct drm_buddy *mm = &mgr->mm; struct drm_buddy *mm = &mgr->mm;
struct drm_buddy_block *block; struct drm_buddy_block *block;
unsigned long pages_per_block; unsigned long pages_per_block;
@ -511,7 +512,19 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
/* Allocate blocks in desired range */ /* Allocate blocks in desired range */
vres->flags |= DRM_BUDDY_RANGE_ALLOCATION; vres->flags |= DRM_BUDDY_RANGE_ALLOCATION;
if (bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC &&
adev->gmc.gmc_funcs->get_dcc_alignment)
adjust_dcc_size = amdgpu_gmc_get_dcc_alignment(adev);
remaining_size = (u64)vres->base.size; remaining_size = (u64)vres->base.size;
if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS && adjust_dcc_size) {
unsigned int dcc_size;
dcc_size = roundup_pow_of_two(vres->base.size + adjust_dcc_size);
remaining_size = (u64)dcc_size;
vres->flags |= DRM_BUDDY_TRIM_DISABLE;
}
mutex_lock(&mgr->lock); mutex_lock(&mgr->lock);
while (remaining_size) { while (remaining_size) {
@ -521,7 +534,10 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
min_block_size = mgr->default_page_size; min_block_size = mgr->default_page_size;
size = remaining_size; size = remaining_size;
if ((size >= (u64)pages_per_block << PAGE_SHIFT) &&
if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS && adjust_dcc_size)
min_block_size = size;
else if ((size >= (u64)pages_per_block << PAGE_SHIFT) &&
!(size & (((u64)pages_per_block << PAGE_SHIFT) - 1))) !(size & (((u64)pages_per_block << PAGE_SHIFT) - 1)))
min_block_size = (u64)pages_per_block << PAGE_SHIFT; min_block_size = (u64)pages_per_block << PAGE_SHIFT;
@ -553,6 +569,22 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
} }
mutex_unlock(&mgr->lock); mutex_unlock(&mgr->lock);
if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS && adjust_dcc_size) {
struct drm_buddy_block *dcc_block;
unsigned long dcc_start;
u64 trim_start;
dcc_block = amdgpu_vram_mgr_first_block(&vres->blocks);
/* Adjust the start address for DCC buffers only */
dcc_start =
roundup((unsigned long)amdgpu_vram_mgr_block_start(dcc_block),
adjust_dcc_size);
trim_start = (u64)dcc_start;
drm_buddy_block_trim(mm, &trim_start,
(u64)vres->base.size,
&vres->blocks);
}
vres->base.start = 0; vres->base.start = 0;
size = max_t(u64, amdgpu_vram_mgr_blocks_size(&vres->blocks), size = max_t(u64, amdgpu_vram_mgr_blocks_size(&vres->blocks),
vres->base.size); vres->base.size);
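For contiguous DCC buffers the allocator above first over-allocates - the request plus the DCC alignment, rounded up to a power of two, with buddy trimming disabled - and afterwards trims the block back to the requested size starting at an alignment-rounded address. A small sketch of just that arithmetic; the helper implementations and the sizes are stand-ins, not the kernel's.

#include <stdio.h>
#include <stdint.h>

/* Stand-ins for the kernel helpers used in amdgpu_vram_mgr_new(). */
static uint64_t roundup_pow_of_two(uint64_t x)
{
	uint64_t v = 1;

	while (v < x)
		v <<= 1;
	return v;
}

static uint64_t roundup(uint64_t x, uint64_t align)
{
	return ((x + align - 1) / align) * align;
}

int main(void)
{
	uint64_t size        = 9 << 20;	/* 9 MiB request, illustrative   */
	uint64_t dcc_align   = 1 << 20;	/* 1 MiB alignment, illustrative */
	uint64_t block_start = 5 << 20;	/* start of the over-sized block */

	/* Over-allocate so an aligned window of 'size' bytes must exist. */
	uint64_t alloc_size = roundup_pow_of_two(size + dcc_align);

	/* Trim back to 'size' bytes beginning at an aligned address. */
	uint64_t trim_start = roundup(block_start, dcc_align);

	printf("alloc %llu MiB, trim to %llu MiB at offset %llu MiB\n",
	       (unsigned long long)(alloc_size >> 20),
	       (unsigned long long)(size >> 20),
	       (unsigned long long)(trim_start >> 20));
	return 0;
}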

View File

@ -202,6 +202,12 @@ static const struct amdgpu_hwip_reg_entry gc_gfx_queue_reg_list_12[] = {
SOC15_REG_ENTRY_STR(GC, 0, regCP_IB1_BUFSZ) SOC15_REG_ENTRY_STR(GC, 0, regCP_IB1_BUFSZ)
}; };
static const struct soc15_reg_golden golden_settings_gc_12_0[] = {
SOC15_REG_GOLDEN_VALUE(GC, 0, regDB_MEM_CONFIG, 0x0000000f, 0x0000000f),
SOC15_REG_GOLDEN_VALUE(GC, 0, regCB_HW_CONTROL_1, 0x03000000, 0x03000000),
SOC15_REG_GOLDEN_VALUE(GC, 0, regGL2C_CTRL5, 0x00000070, 0x00000020)
};
#define DEFAULT_SH_MEM_CONFIG \ #define DEFAULT_SH_MEM_CONFIG \
((SH_MEM_ADDRESS_MODE_64 << SH_MEM_CONFIG__ADDRESS_MODE__SHIFT) | \ ((SH_MEM_ADDRESS_MODE_64 << SH_MEM_CONFIG__ADDRESS_MODE__SHIFT) | \
(SH_MEM_ALIGNMENT_MODE_UNALIGNED << SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) | \ (SH_MEM_ALIGNMENT_MODE_UNALIGNED << SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) | \
@ -3432,6 +3438,24 @@ static void gfx_v12_0_disable_gpa_mode(struct amdgpu_device *adev)
WREG32_SOC15(GC, 0, regCPG_PSP_DEBUG, data); WREG32_SOC15(GC, 0, regCPG_PSP_DEBUG, data);
} }
static void gfx_v12_0_init_golden_registers(struct amdgpu_device *adev)
{
if (amdgpu_sriov_vf(adev))
return;
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(12, 0, 0):
case IP_VERSION(12, 0, 1):
if (adev->rev_id == 0)
soc15_program_register_sequence(adev,
golden_settings_gc_12_0,
(const u32)ARRAY_SIZE(golden_settings_gc_12_0));
break;
default:
break;
}
}
static int gfx_v12_0_hw_init(void *handle) static int gfx_v12_0_hw_init(void *handle)
{ {
int r; int r;
@ -3472,6 +3496,9 @@ static int gfx_v12_0_hw_init(void *handle)
} }
} }
if (!amdgpu_emu_mode)
gfx_v12_0_init_golden_registers(adev);
adev->gfx.is_poweron = true; adev->gfx.is_poweron = true;
if (get_gb_addr_config(adev)) if (get_gb_addr_config(adev))

View File

@ -542,6 +542,23 @@ static unsigned gmc_v12_0_get_vbios_fb_size(struct amdgpu_device *adev)
return 0; return 0;
} }
static unsigned int gmc_v12_0_get_dcc_alignment(struct amdgpu_device *adev)
{
unsigned int max_tex_channel_caches, alignment;
if (amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(12, 0, 0) &&
amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(12, 0, 1))
return 0;
max_tex_channel_caches = adev->gfx.config.max_texture_channel_caches;
if (is_power_of_2(max_tex_channel_caches))
alignment = (unsigned int)(max_tex_channel_caches / SZ_4);
else
alignment = roundup_pow_of_two(max_tex_channel_caches);
return (unsigned int)(alignment * max_tex_channel_caches * SZ_1K);
}
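gmc_v12_0_get_dcc_alignment() above derives the alignment from the texture channel cache count: divide by four when the count is a power of two, otherwise round the count up to a power of two, then scale by the count in KiB. A tiny userspace mirror of that formula; the cache counts in main() are illustrative.

#include <stdio.h>

#define SZ_1K 1024u
#define SZ_4  4u

static int is_power_of_2(unsigned int n)
{
	return n && !(n & (n - 1));
}

static unsigned int roundup_pow_of_two(unsigned int n)
{
	unsigned int v = 1;

	while (v < n)
		v <<= 1;
	return v;
}

/* Mirror of the alignment formula in gmc_v12_0_get_dcc_alignment(). */
static unsigned int dcc_alignment(unsigned int max_tex_channel_caches)
{
	unsigned int alignment;

	if (is_power_of_2(max_tex_channel_caches))
		alignment = max_tex_channel_caches / SZ_4;
	else
		alignment = roundup_pow_of_two(max_tex_channel_caches);

	return alignment * max_tex_channel_caches * SZ_1K;
}

int main(void)
{
	printf("16 caches -> %u bytes\n", dcc_alignment(16));	/* 4 * 16 KiB  */
	printf("12 caches -> %u bytes\n", dcc_alignment(12));	/* 16 * 12 KiB */
	return 0;
}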
static const struct amdgpu_gmc_funcs gmc_v12_0_gmc_funcs = { static const struct amdgpu_gmc_funcs gmc_v12_0_gmc_funcs = {
.flush_gpu_tlb = gmc_v12_0_flush_gpu_tlb, .flush_gpu_tlb = gmc_v12_0_flush_gpu_tlb,
.flush_gpu_tlb_pasid = gmc_v12_0_flush_gpu_tlb_pasid, .flush_gpu_tlb_pasid = gmc_v12_0_flush_gpu_tlb_pasid,
@ -551,6 +568,7 @@ static const struct amdgpu_gmc_funcs gmc_v12_0_gmc_funcs = {
.get_vm_pde = gmc_v12_0_get_vm_pde, .get_vm_pde = gmc_v12_0_get_vm_pde,
.get_vm_pte = gmc_v12_0_get_vm_pte, .get_vm_pte = gmc_v12_0_get_vm_pte,
.get_vbios_fb_size = gmc_v12_0_get_vbios_fb_size, .get_vbios_fb_size = gmc_v12_0_get_vbios_fb_size,
.get_dcc_alignment = gmc_v12_0_get_dcc_alignment,
}; };
static void gmc_v12_0_set_gmc_funcs(struct amdgpu_device *adev) static void gmc_v12_0_set_gmc_funcs(struct amdgpu_device *adev)

View File

@ -80,7 +80,8 @@ static uint32_t mmhub_v4_1_0_get_invalidate_req(unsigned int vmid,
/* invalidate using legacy mode on vmid*/ /* invalidate using legacy mode on vmid*/
req = REG_SET_FIELD(req, MMVM_INVALIDATE_ENG0_REQ, req = REG_SET_FIELD(req, MMVM_INVALIDATE_ENG0_REQ,
PER_VMID_INVALIDATE_REQ, 1 << vmid); PER_VMID_INVALIDATE_REQ, 1 << vmid);
req = REG_SET_FIELD(req, MMVM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, flush_type); /* Only use legacy inv on mmhub side */
req = REG_SET_FIELD(req, MMVM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
req = REG_SET_FIELD(req, MMVM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1); req = REG_SET_FIELD(req, MMVM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
req = REG_SET_FIELD(req, MMVM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1); req = REG_SET_FIELD(req, MMVM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
req = REG_SET_FIELD(req, MMVM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1); req = REG_SET_FIELD(req, MMVM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);

View File

@ -1575,8 +1575,7 @@ static void sdma_v7_0_emit_copy_buffer(struct amdgpu_ib *ib,
ib->ptr[ib->length_dw++] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_COPY) | ib->ptr[ib->length_dw++] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_COPY) |
SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_COPY_LINEAR) | SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_COPY_LINEAR) |
SDMA_PKT_COPY_LINEAR_HEADER_TMZ((copy_flags & AMDGPU_COPY_FLAGS_TMZ) ? 1 : 0) | SDMA_PKT_COPY_LINEAR_HEADER_TMZ((copy_flags & AMDGPU_COPY_FLAGS_TMZ) ? 1 : 0) |
SDMA_PKT_COPY_LINEAR_HEADER_CPV((copy_flags & SDMA_PKT_COPY_LINEAR_HEADER_CPV(1);
(AMDGPU_COPY_FLAGS_READ_DECOMPRESSED | AMDGPU_COPY_FLAGS_WRITE_COMPRESSED)) ? 1 : 0);
ib->ptr[ib->length_dw++] = byte_count - 1; ib->ptr[ib->length_dw++] = byte_count - 1;
ib->ptr[ib->length_dw++] = 0; /* src/dst endian swap */ ib->ptr[ib->length_dw++] = 0; /* src/dst endian swap */
@ -1590,6 +1589,8 @@ static void sdma_v7_0_emit_copy_buffer(struct amdgpu_ib *ib,
((copy_flags & AMDGPU_COPY_FLAGS_READ_DECOMPRESSED) ? SDMA_DCC_READ_CM(2) : 0) | ((copy_flags & AMDGPU_COPY_FLAGS_READ_DECOMPRESSED) ? SDMA_DCC_READ_CM(2) : 0) |
((copy_flags & AMDGPU_COPY_FLAGS_WRITE_COMPRESSED) ? SDMA_DCC_WRITE_CM(1) : 0) | ((copy_flags & AMDGPU_COPY_FLAGS_WRITE_COMPRESSED) ? SDMA_DCC_WRITE_CM(1) : 0) |
SDMA_DCC_MAX_COM(max_com) | SDMA_DCC_MAX_UCOM(1); SDMA_DCC_MAX_COM(max_com) | SDMA_DCC_MAX_UCOM(1);
else
ib->ptr[ib->length_dw++] = 0;
} }
/** /**
@ -1616,7 +1617,7 @@ static void sdma_v7_0_emit_fill_buffer(struct amdgpu_ib *ib,
static const struct amdgpu_buffer_funcs sdma_v7_0_buffer_funcs = { static const struct amdgpu_buffer_funcs sdma_v7_0_buffer_funcs = {
.copy_max_bytes = 0x400000, .copy_max_bytes = 0x400000,
.copy_num_dw = 7, .copy_num_dw = 8,
.emit_copy_buffer = sdma_v7_0_emit_copy_buffer, .emit_copy_buffer = sdma_v7_0_emit_copy_buffer,
.fill_max_bytes = 0x400000, .fill_max_bytes = 0x400000,
.fill_num_dw = 5, .fill_num_dw = 5,

View File

@ -1270,6 +1270,9 @@ static bool is_dsc_need_re_compute(
} }
} }
if (new_stream_on_link_num == 0)
return false;
/* check current_state if there stream on link but it is not in /* check current_state if there stream on link but it is not in
* new request state * new request state
*/ */

View File

@ -185,8 +185,7 @@ static bool dmub_replay_copy_settings(struct dmub_replay *dmub,
else else
copy_settings_data->flags.bitfields.force_wakeup_by_tps3 = 0; copy_settings_data->flags.bitfields.force_wakeup_by_tps3 = 0;
dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
return true; return true;
} }

View File

@ -83,6 +83,8 @@ CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn31/display_rq_dlg_calc_31.o := $(dml_rcfla
CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_32.o := $(dml_rcflags) CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_32.o := $(dml_rcflags)
CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn32/display_rq_dlg_calc_32.o := $(dml_rcflags) CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn32/display_rq_dlg_calc_32.o := $(dml_rcflags)
CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_util_32.o := $(dml_rcflags) CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_util_32.o := $(dml_rcflags)
CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn314/display_mode_vba_314.o := $(dml_rcflags)
CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn314/display_rq_dlg_calc_314.o := $(dml_rcflags)
CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn301/dcn301_fpu.o := $(dml_rcflags) CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn301/dcn301_fpu.o := $(dml_rcflags)
CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/display_mode_lib.o := $(dml_rcflags) CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/display_mode_lib.o := $(dml_rcflags)
CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dsc/rc_calc_fpu.o := $(dml_rcflags) CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dsc/rc_calc_fpu.o := $(dml_rcflags)

View File

@ -1402,6 +1402,8 @@ void dcn10_init_pipes(struct dc *dc, struct dc_state *context)
if (hubbub && hubp) { if (hubbub && hubp) {
if (hubbub->funcs->program_det_size) if (hubbub->funcs->program_det_size)
hubbub->funcs->program_det_size(hubbub, hubp->inst, 0); hubbub->funcs->program_det_size(hubbub, hubp->inst, 0);
if (hubbub->funcs->program_det_segments)
hubbub->funcs->program_det_segments(hubbub, hubp->inst, 0);
} }
} }

View File

@ -771,6 +771,8 @@ void dcn35_init_pipes(struct dc *dc, struct dc_state *context)
if (hubbub && hubp) { if (hubbub && hubp) {
if (hubbub->funcs->program_det_size) if (hubbub->funcs->program_det_size)
hubbub->funcs->program_det_size(hubbub, hubp->inst, 0); hubbub->funcs->program_det_size(hubbub, hubp->inst, 0);
if (hubbub->funcs->program_det_segments)
hubbub->funcs->program_det_segments(hubbub, hubp->inst, 0);
} }
} }

View File

@ -723,6 +723,7 @@ static const struct dc_debug_options debug_defaults_drv = {
.min_prefetch_in_strobe_ns = 60000, // 60us .min_prefetch_in_strobe_ns = 60000, // 60us
.disable_unbounded_requesting = false, .disable_unbounded_requesting = false,
.enable_legacy_fast_update = false, .enable_legacy_fast_update = false,
.dcc_meta_propagation_delay_us = 10,
.fams2_config = { .fams2_config = {
.bits = { .bits = {
.enable = true, .enable = true,

View File

@ -138,7 +138,9 @@ void dcn401_prepare_mcache_programming(struct dc *dc, struct dc_state *context);
SRI_ARR(DCHUBP_MALL_CONFIG, HUBP, id), \ SRI_ARR(DCHUBP_MALL_CONFIG, HUBP, id), \
SRI_ARR(DCHUBP_VMPG_CONFIG, HUBP, id), \ SRI_ARR(DCHUBP_VMPG_CONFIG, HUBP, id), \
SRI_ARR(UCLK_PSTATE_FORCE, HUBPREQ, id), \ SRI_ARR(UCLK_PSTATE_FORCE, HUBPREQ, id), \
HUBP_3DLUT_FL_REG_LIST_DCN401(id) HUBP_3DLUT_FL_REG_LIST_DCN401(id), \
SRI_ARR(DCSURF_VIEWPORT_MCACHE_SPLIT_COORDINATE, HUBP, id), \
SRI_ARR(DCHUBP_MCACHEID_CONFIG, HUBP, id)
/* ABM */ /* ABM */
#define ABM_DCN401_REG_LIST_RI(id) \ #define ABM_DCN401_REG_LIST_RI(id) \

View File

@ -27,7 +27,8 @@
#pragma pack(push, 1) #pragma pack(push, 1)
#define SMU_14_0_2_TABLE_FORMAT_REVISION 3 #define SMU_14_0_2_TABLE_FORMAT_REVISION 23
#define SMU_14_0_2_CUSTOM_TABLE_FORMAT_REVISION 1
// POWERPLAYTABLE::ulPlatformCaps // POWERPLAYTABLE::ulPlatformCaps
#define SMU_14_0_2_PP_PLATFORM_CAP_POWERPLAY 0x1 // This cap indicates whether CCC need to show Powerplay page. #define SMU_14_0_2_PP_PLATFORM_CAP_POWERPLAY 0x1 // This cap indicates whether CCC need to show Powerplay page.
@ -43,6 +44,7 @@
#define SMU_14_0_2_PP_THERMALCONTROLLER_NONE 0 #define SMU_14_0_2_PP_THERMALCONTROLLER_NONE 0
#define SMU_14_0_2_PP_OVERDRIVE_VERSION 0x1 // TODO: FIX OverDrive Version TBD #define SMU_14_0_2_PP_OVERDRIVE_VERSION 0x1 // TODO: FIX OverDrive Version TBD
#define SMU_14_0_2_PP_CUSTOM_OVERDRIVE_VERSION 0x1
#define SMU_14_0_2_PP_POWERSAVINGCLOCK_VERSION 0x01 // Power Saving Clock Table Version 1.00 #define SMU_14_0_2_PP_POWERSAVINGCLOCK_VERSION 0x01 // Power Saving Clock Table Version 1.00
enum SMU_14_0_2_OD_SW_FEATURE_CAP enum SMU_14_0_2_OD_SW_FEATURE_CAP
@ -107,6 +109,7 @@ enum SMU_14_0_2_PWRMODE_SETTING
SMU_14_0_2_PMSETTING_ACOUSTIC_LIMIT_RPM_BALANCE, SMU_14_0_2_PMSETTING_ACOUSTIC_LIMIT_RPM_BALANCE,
SMU_14_0_2_PMSETTING_ACOUSTIC_LIMIT_RPM_TURBO, SMU_14_0_2_PMSETTING_ACOUSTIC_LIMIT_RPM_TURBO,
SMU_14_0_2_PMSETTING_ACOUSTIC_LIMIT_RPM_RAGE, SMU_14_0_2_PMSETTING_ACOUSTIC_LIMIT_RPM_RAGE,
SMU_14_0_2_PMSETTING_COUNT
}; };
#define SMU_14_0_2_MAX_PMSETTING 32 // Maximum Number of PowerMode Settings #define SMU_14_0_2_MAX_PMSETTING 32 // Maximum Number of PowerMode Settings
@ -127,16 +130,23 @@ struct smu_14_0_2_overdrive_table
int16_t pm_setting[SMU_14_0_2_MAX_PMSETTING]; // Optimized power mode feature settings int16_t pm_setting[SMU_14_0_2_MAX_PMSETTING]; // Optimized power mode feature settings
}; };
enum smu_14_0_3_pptable_source {
PPTABLE_SOURCE_IFWI = 0,
PPTABLE_SOURCE_DRIVER_HARDCODED = 1,
PPTABLE_SOURCE_PPGEN_REGISTRY = 2,
PPTABLE_SOURCE_MAX = PPTABLE_SOURCE_PPGEN_REGISTRY,
};
struct smu_14_0_2_powerplay_table struct smu_14_0_2_powerplay_table
{ {
struct atom_common_table_header header; // header.format_revision = 3 (HAS TO MATCH SMU_14_0_2_TABLE_FORMAT_REVISION), header.content_revision = ? structuresize is calculated by PPGen. struct atom_common_table_header header; // header.format_revision = 3 (HAS TO MATCH SMU_14_0_2_TABLE_FORMAT_REVISION), header.content_revision = ? structuresize is calculated by PPGen.
uint8_t table_revision; // PPGen use only: table_revision = 3 uint8_t table_revision; // PPGen use only: table_revision = 3
uint8_t padding; // Padding 1 byte to align table_size offset to 6 bytes (pmfw_start_offset, for PMFW to know the starting offset of PPTable_t). uint8_t pptable_source; // PPGen UI dropdown box
uint16_t pmfw_pptable_start_offset; // The start offset of the pmfw portion. i.e. start of PPTable_t (start of SkuTable_t) uint16_t pmfw_pptable_start_offset; // The start offset of the pmfw portion. i.e. start of PPTable_t (start of SkuTable_t)
uint16_t pmfw_pptable_size; // The total size of pmfw_pptable, i.e PPTable_t. uint16_t pmfw_pptable_size; // The total size of pmfw_pptable, i.e PPTable_t.
uint16_t pmfw_pfe_table_start_offset; // The start offset of the PFE_Settings_t within pmfw_pptable. uint16_t pmfw_sku_table_start_offset; // DO NOT CHANGE ORDER; The absolute start offset of the SkuTable_t (within smu_14_0_3_powerplay_table).
uint16_t pmfw_pfe_table_size; // The size of PFE_Settings_t. uint16_t pmfw_sku_table_size; // DO NOT CHANGE ORDER; The size of SkuTable_t.
uint16_t pmfw_board_table_start_offset; // The start offset of the BoardTable_t within pmfw_pptable. uint16_t pmfw_board_table_start_offset; // The start offset of the BoardTable_t
uint16_t pmfw_board_table_size; // The size of BoardTable_t. uint16_t pmfw_board_table_size; // The size of BoardTable_t.
uint16_t pmfw_custom_sku_table_start_offset; // The start offset of the CustomSkuTable_t within pmfw_pptable. uint16_t pmfw_custom_sku_table_start_offset; // The start offset of the CustomSkuTable_t within pmfw_pptable.
uint16_t pmfw_custom_sku_table_size; // The size of the CustomSkuTable_t. uint16_t pmfw_custom_sku_table_size; // The size of the CustomSkuTable_t.
@ -159,6 +169,36 @@ struct smu_14_0_2_powerplay_table
PPTable_t smc_pptable; // PPTable_t in driver_if.h -- as requested by PMFW, this offset should start at a 32-byte boundary, and the table_size above should remain at offset=6 bytes PPTable_t smc_pptable; // PPTable_t in driver_if.h -- as requested by PMFW, this offset should start at a 32-byte boundary, and the table_size above should remain at offset=6 bytes
}; };
enum SMU_14_0_2_CUSTOM_OD_SW_FEATURE_CAP {
SMU_14_0_2_CUSTOM_ODCAP_POWER_MODE = 0,
SMU_14_0_2_CUSTOM_ODCAP_COUNT
};
enum SMU_14_0_2_CUSTOM_OD_FEATURE_SETTING_ID {
SMU_14_0_2_CUSTOM_ODSETTING_POWER_MODE = 0,
SMU_14_0_2_CUSTOM_ODSETTING_COUNT,
};
struct smu_14_0_2_custom_overdrive_table {
uint8_t revision;
uint8_t reserve[3];
uint8_t cap[SMU_14_0_2_CUSTOM_ODCAP_COUNT];
int32_t max[SMU_14_0_2_CUSTOM_ODSETTING_COUNT];
int32_t min[SMU_14_0_2_CUSTOM_ODSETTING_COUNT];
int16_t pm_setting[SMU_14_0_2_PMSETTING_COUNT];
};
struct smu_14_0_3_custom_powerplay_table {
uint8_t custom_table_revision;
uint16_t custom_table_size;
uint16_t custom_sku_table_offset;
uint32_t custom_platform_caps;
uint16_t software_shutdown_temp;
struct smu_14_0_2_custom_overdrive_table custom_overdrive_table;
uint32_t reserve[8];
CustomSkuTable_t custom_sku_table_pmfw;
};
#pragma pack(pop) #pragma pack(pop)
#endif #endif

View File

@ -1071,23 +1071,16 @@ int drm_atomic_set_property(struct drm_atomic_state *state,
} }
if (async_flip && if (async_flip &&
prop != config->prop_fb_id && (plane_state->plane->type != DRM_PLANE_TYPE_PRIMARY ||
(prop != config->prop_fb_id &&
prop != config->prop_in_fence_fd && prop != config->prop_in_fence_fd &&
prop != config->prop_fb_damage_clips) { prop != config->prop_fb_damage_clips))) {
ret = drm_atomic_plane_get_property(plane, plane_state, ret = drm_atomic_plane_get_property(plane, plane_state,
prop, &old_val); prop, &old_val);
ret = drm_atomic_check_prop_changes(ret, old_val, prop_value, prop); ret = drm_atomic_check_prop_changes(ret, old_val, prop_value, prop);
break; break;
} }
if (async_flip && plane_state->plane->type != DRM_PLANE_TYPE_PRIMARY) {
drm_dbg_atomic(prop->dev,
"[OBJECT:%d] Only primary planes can be changed during async flip\n",
obj->id);
ret = -EINVAL;
break;
}
ret = drm_atomic_plane_set_property(plane, ret = drm_atomic_plane_set_property(plane,
plane_state, file_priv, plane_state, file_priv,
prop, prop_value); prop, prop_value);

View File

@ -443,10 +443,8 @@ struct drm_connector *drm_bridge_connector_init(struct drm_device *drm,
panel_bridge = bridge; panel_bridge = bridge;
} }
if (connector_type == DRM_MODE_CONNECTOR_Unknown) { if (connector_type == DRM_MODE_CONNECTOR_Unknown)
kfree(bridge_connector);
return ERR_PTR(-EINVAL); return ERR_PTR(-EINVAL);
}
if (bridge_connector->bridge_hdmi) if (bridge_connector->bridge_hdmi)
ret = drmm_connector_hdmi_init(drm, connector, ret = drmm_connector_hdmi_init(drm, connector,
@ -461,10 +459,8 @@ struct drm_connector *drm_bridge_connector_init(struct drm_device *drm,
ret = drmm_connector_init(drm, connector, ret = drmm_connector_init(drm, connector,
&drm_bridge_connector_funcs, &drm_bridge_connector_funcs,
connector_type, ddc); connector_type, ddc);
if (ret) { if (ret)
kfree(bridge_connector);
return ERR_PTR(ret); return ERR_PTR(ret);
}
drm_connector_helper_add(connector, &drm_bridge_connector_helper_funcs); drm_connector_helper_add(connector, &drm_bridge_connector_helper_funcs);

View File

@ -851,6 +851,7 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
* drm_buddy_block_trim - free unused pages * drm_buddy_block_trim - free unused pages
* *
* @mm: DRM buddy manager * @mm: DRM buddy manager
* @start: start address to begin the trimming.
* @new_size: original size requested * @new_size: original size requested
* @blocks: Input and output list of allocated blocks. * @blocks: Input and output list of allocated blocks.
* MUST contain single block as input to be trimmed. * MUST contain single block as input to be trimmed.
@ -866,11 +867,13 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
* 0 on success, error code on failure. * 0 on success, error code on failure.
*/ */
int drm_buddy_block_trim(struct drm_buddy *mm, int drm_buddy_block_trim(struct drm_buddy *mm,
u64 *start,
u64 new_size, u64 new_size,
struct list_head *blocks) struct list_head *blocks)
{ {
struct drm_buddy_block *parent; struct drm_buddy_block *parent;
struct drm_buddy_block *block; struct drm_buddy_block *block;
u64 block_start, block_end;
LIST_HEAD(dfs); LIST_HEAD(dfs);
u64 new_start; u64 new_start;
int err; int err;
@ -882,6 +885,9 @@ int drm_buddy_block_trim(struct drm_buddy *mm,
struct drm_buddy_block, struct drm_buddy_block,
link); link);
block_start = drm_buddy_block_offset(block);
block_end = block_start + drm_buddy_block_size(mm, block);
if (WARN_ON(!drm_buddy_block_is_allocated(block))) if (WARN_ON(!drm_buddy_block_is_allocated(block)))
return -EINVAL; return -EINVAL;
@ -894,6 +900,20 @@ int drm_buddy_block_trim(struct drm_buddy *mm,
if (new_size == drm_buddy_block_size(mm, block)) if (new_size == drm_buddy_block_size(mm, block))
return 0; return 0;
new_start = block_start;
if (start) {
new_start = *start;
if (new_start < block_start)
return -EINVAL;
if (!IS_ALIGNED(new_start, mm->chunk_size))
return -EINVAL;
if (range_overflows(new_start, new_size, block_end))
return -EINVAL;
}
list_del(&block->link); list_del(&block->link);
mark_free(mm, block); mark_free(mm, block);
mm->avail += drm_buddy_block_size(mm, block); mm->avail += drm_buddy_block_size(mm, block);
@ -904,7 +924,6 @@ int drm_buddy_block_trim(struct drm_buddy *mm,
parent = block->parent; parent = block->parent;
block->parent = NULL; block->parent = NULL;
new_start = drm_buddy_block_offset(block);
list_add(&block->tmp_link, &dfs); list_add(&block->tmp_link, &dfs);
err = __alloc_range(mm, &dfs, new_start, new_size, blocks, NULL); err = __alloc_range(mm, &dfs, new_start, new_size, blocks, NULL);
if (err) { if (err) {
@ -1066,7 +1085,8 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
} while (1); } while (1);
/* Trim the allocated block to the required size */ /* Trim the allocated block to the required size */
if (original_size != size) { if (!(flags & DRM_BUDDY_TRIM_DISABLE) &&
original_size != size) {
struct list_head *trim_list; struct list_head *trim_list;
LIST_HEAD(temp); LIST_HEAD(temp);
u64 trim_size; u64 trim_size;
@ -1083,6 +1103,7 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
} }
drm_buddy_block_trim(mm, drm_buddy_block_trim(mm,
NULL,
trim_size, trim_size,
trim_list); trim_list);
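drm_buddy_block_trim() now accepts an optional start address, which must not precede the block, must be chunk-aligned, and must leave room for new_size before the block end. A compact sketch of that validation follows; range_overflows() here is a simple overflow-aware stand-in for the kernel macro.

#include <stdio.h>
#include <stdint.h>
#include <errno.h>

#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

/* True if [start, start + size) does not fit before 'end'. */
static int range_overflows(uint64_t start, uint64_t size, uint64_t end)
{
	return size > end || start > end - size;
}

/* Validate a caller-supplied trim start, as the reworked
 * drm_buddy_block_trim() does before re-allocating the sub-range. */
static int check_trim_start(uint64_t block_start, uint64_t block_end,
			    uint64_t chunk_size, uint64_t new_start,
			    uint64_t new_size)
{
	if (new_start < block_start)
		return -EINVAL;
	if (!IS_ALIGNED(new_start, chunk_size))
		return -EINVAL;
	if (range_overflows(new_start, new_size, block_end))
		return -EINVAL;
	return 0;
}

int main(void)
{
	/* Block spans [1 MiB, 17 MiB), chunk size 4 KiB - illustrative. */
	uint64_t bs = 1 << 20, be = 17 << 20, chunk = 4096;

	printf("aligned, in range: %d\n",
	       check_trim_start(bs, be, chunk, 5 << 20, 9 << 20));	/* 0 */
	printf("unaligned start:   %d\n",
	       check_trim_start(bs, be, chunk, (5 << 20) + 1, 9 << 20));
	printf("runs past the end: %d\n",
	       check_trim_start(bs, be, chunk, 12 << 20, 9 << 20));
	return 0;
}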

View File

@ -880,6 +880,11 @@ int drm_client_modeset_probe(struct drm_client_dev *client, unsigned int width,
kfree(modeset->mode); kfree(modeset->mode);
modeset->mode = drm_mode_duplicate(dev, mode); modeset->mode = drm_mode_duplicate(dev, mode);
if (!modeset->mode) {
ret = -ENOMEM;
break;
}
drm_connector_get(connector); drm_connector_get(connector);
modeset->connectors[modeset->num_connectors++] = connector; modeset->connectors[modeset->num_connectors++] = connector;
modeset->x = offset->x; modeset->x = offset->x;

View File

@ -1449,6 +1449,9 @@ bxt_setup_backlight(struct intel_connector *connector, enum pipe unused)
static int cnp_num_backlight_controllers(struct drm_i915_private *i915) static int cnp_num_backlight_controllers(struct drm_i915_private *i915)
{ {
if (INTEL_PCH_TYPE(i915) >= PCH_MTL)
return 2;
if (INTEL_PCH_TYPE(i915) >= PCH_DG1) if (INTEL_PCH_TYPE(i915) >= PCH_DG1)
return 1; return 1;

View File

@ -351,6 +351,9 @@ static int intel_num_pps(struct drm_i915_private *i915)
if (IS_GEMINILAKE(i915) || IS_BROXTON(i915)) if (IS_GEMINILAKE(i915) || IS_BROXTON(i915))
return 2; return 2;
if (INTEL_PCH_TYPE(i915) >= PCH_MTL)
return 2;
if (INTEL_PCH_TYPE(i915) >= PCH_DG1) if (INTEL_PCH_TYPE(i915) >= PCH_DG1)
return 1; return 1;

View File

@ -290,6 +290,41 @@ out:
return i915_error_to_vmf_fault(err); return i915_error_to_vmf_fault(err);
} }
static void set_address_limits(struct vm_area_struct *area,
struct i915_vma *vma,
unsigned long obj_offset,
unsigned long *start_vaddr,
unsigned long *end_vaddr)
{
unsigned long vm_start, vm_end, vma_size; /* user's memory parameters */
long start, end; /* memory boundaries */
/*
* Let's move into the ">> PAGE_SHIFT"
* domain to be sure not to lose bits
*/
vm_start = area->vm_start >> PAGE_SHIFT;
vm_end = area->vm_end >> PAGE_SHIFT;
vma_size = vma->size >> PAGE_SHIFT;
/*
* Calculate the memory boundaries by considering the offset
* provided by the user during memory mapping and the offset
* provided for the partial mapping.
*/
start = vm_start;
start -= obj_offset;
start += vma->gtt_view.partial.offset;
end = start + vma_size;
start = max_t(long, start, vm_start);
end = min_t(long, end, vm_end);
/* Let's move back into the "<< PAGE_SHIFT" domain */
*start_vaddr = (unsigned long)start << PAGE_SHIFT;
*end_vaddr = (unsigned long)end << PAGE_SHIFT;
}
static vm_fault_t vm_fault_gtt(struct vm_fault *vmf) static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
{ {
#define MIN_CHUNK_PAGES (SZ_1M >> PAGE_SHIFT) #define MIN_CHUNK_PAGES (SZ_1M >> PAGE_SHIFT)
@ -302,14 +337,18 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
struct i915_ggtt *ggtt = to_gt(i915)->ggtt; struct i915_ggtt *ggtt = to_gt(i915)->ggtt;
bool write = area->vm_flags & VM_WRITE; bool write = area->vm_flags & VM_WRITE;
struct i915_gem_ww_ctx ww; struct i915_gem_ww_ctx ww;
unsigned long obj_offset;
unsigned long start, end; /* memory boundaries */
intel_wakeref_t wakeref; intel_wakeref_t wakeref;
struct i915_vma *vma; struct i915_vma *vma;
pgoff_t page_offset; pgoff_t page_offset;
unsigned long pfn;
int srcu; int srcu;
int ret; int ret;
/* We don't use vmf->pgoff since that has the fake offset */ obj_offset = area->vm_pgoff - drm_vma_node_start(&mmo->vma_node);
page_offset = (vmf->address - area->vm_start) >> PAGE_SHIFT; page_offset = (vmf->address - area->vm_start) >> PAGE_SHIFT;
page_offset += obj_offset;
trace_i915_gem_object_fault(obj, page_offset, true, write); trace_i915_gem_object_fault(obj, page_offset, true, write);
@ -402,12 +441,14 @@ retry:
if (ret) if (ret)
goto err_unpin; goto err_unpin;
set_address_limits(area, vma, obj_offset, &start, &end);
pfn = (ggtt->gmadr.start + i915_ggtt_offset(vma)) >> PAGE_SHIFT;
pfn += (start - area->vm_start) >> PAGE_SHIFT;
pfn += obj_offset - vma->gtt_view.partial.offset;
/* Finally, remap it using the new GTT offset */ /* Finally, remap it using the new GTT offset */
ret = remap_io_mapping(area, ret = remap_io_mapping(area, start, pfn, end - start, &ggtt->iomap);
area->vm_start + (vma->gtt_view.partial.offset << PAGE_SHIFT),
(ggtt->gmadr.start + i915_ggtt_offset(vma)) >> PAGE_SHIFT,
min_t(u64, vma->size, area->vm_end - area->vm_start),
&ggtt->iomap);
if (ret) if (ret)
goto err_fence; goto err_fence;
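set_address_limits() above clamps the window to remap to the intersection of the user's VMA and the bound, possibly partial, GTT view, doing the arithmetic in page units so offsets are not truncated; vm_fault_gtt() then advances the PFN by the same clamped offset. A standalone sketch of the clamping with made-up numbers:

#include <stdio.h>

#define PAGE_SHIFT 12

static long max_l(long a, long b) { return a > b ? a : b; }
static long min_l(long a, long b) { return a < b ? a : b; }

/* Byte inputs for the VMA, page inputs for the offsets; start/end
 * come back clamped to the VMA, in bytes. */
static void set_address_limits(unsigned long vm_start_b, unsigned long vm_end_b,
			       unsigned long view_size_b,
			       unsigned long obj_offset_pg,
			       unsigned long partial_offset_pg,
			       unsigned long *start_b, unsigned long *end_b)
{
	/* Work in the ">> PAGE_SHIFT" domain to be sure not to lose bits. */
	long vm_start  = vm_start_b >> PAGE_SHIFT;
	long vm_end    = vm_end_b   >> PAGE_SHIFT;
	long view_size = view_size_b >> PAGE_SHIFT;
	long start, end;

	start = vm_start - obj_offset_pg + partial_offset_pg;
	end   = start + view_size;

	start = max_l(start, vm_start);
	end   = min_l(end, vm_end);

	*start_b = (unsigned long)start << PAGE_SHIFT;
	*end_b   = (unsigned long)end   << PAGE_SHIFT;
}

int main(void)
{
	unsigned long start, end;

	/* 64 KiB VMA at 0x100000, mapping starts 4 pages into the object,
	 * partial view begins 2 pages in - all values illustrative. */
	set_address_limits(0x100000, 0x110000, 0x40000, 4, 2, &start, &end);
	printf("remap [%#lx, %#lx)\n", start, end);
	return 0;
}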
@ -1084,6 +1125,8 @@ int i915_gem_fb_mmap(struct drm_i915_gem_object *obj, struct vm_area_struct *vma
mmo = mmap_offset_attach(obj, mmap_type, NULL); mmo = mmap_offset_attach(obj, mmap_type, NULL);
if (IS_ERR(mmo)) if (IS_ERR(mmo))
return PTR_ERR(mmo); return PTR_ERR(mmo);
vma->vm_pgoff += drm_vma_node_start(&mmo->vma_node);
} }
/* /*

View File

@ -165,7 +165,6 @@ i915_ttm_placement_from_obj(const struct drm_i915_gem_object *obj,
i915_ttm_place_from_region(num_allowed ? obj->mm.placements[0] : i915_ttm_place_from_region(num_allowed ? obj->mm.placements[0] :
obj->mm.region, &places[0], obj->bo_offset, obj->mm.region, &places[0], obj->bo_offset,
obj->base.size, flags); obj->base.size, flags);
places[0].flags |= TTM_PL_FLAG_DESIRED;
/* Cache this on object? */ /* Cache this on object? */
for (i = 0; i < num_allowed; ++i) { for (i = 0; i < num_allowed; ++i) {
@ -779,13 +778,16 @@ static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
.interruptible = true, .interruptible = true,
.no_wait_gpu = false, .no_wait_gpu = false,
}; };
int real_num_busy; struct ttm_placement initial_placement;
struct ttm_place initial_place;
int ret; int ret;
/* First try only the requested placement. No eviction. */ /* First try only the requested placement. No eviction. */
real_num_busy = placement->num_placement; initial_placement.num_placement = 1;
placement->num_placement = 1; memcpy(&initial_place, placement->placement, sizeof(struct ttm_place));
ret = ttm_bo_validate(bo, placement, &ctx); initial_place.flags |= TTM_PL_FLAG_DESIRED;
initial_placement.placement = &initial_place;
ret = ttm_bo_validate(bo, &initial_placement, &ctx);
if (ret) { if (ret) {
ret = i915_ttm_err_to_gem(ret); ret = i915_ttm_err_to_gem(ret);
/* /*
@ -800,7 +802,6 @@ static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
* If the initial attempt fails, allow all accepted placements, * If the initial attempt fails, allow all accepted placements,
* evicting if necessary. * evicting if necessary.
*/ */
placement->num_placement = real_num_busy;
ret = ttm_bo_validate(bo, placement, &ctx); ret = ttm_bo_validate(bo, placement, &ctx);
if (ret) if (ret)
return i915_ttm_err_to_gem(ret); return i915_ttm_err_to_gem(ret);

View File

@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only # SPDX-License-Identifier: GPL-2.0-only
config DRM_OMAP config DRM_OMAP
tristate "OMAP DRM" tristate "OMAP DRM"
depends on MMU
depends on DRM && OF depends on DRM && OF
depends on ARCH_OMAP2PLUS || (COMPILE_TEST && PAGE_SIZE_LESS_THAN_64KB) depends on ARCH_OMAP2PLUS || (COMPILE_TEST && PAGE_SIZE_LESS_THAN_64KB)
select DRM_KMS_HELPER select DRM_KMS_HELPER

View File

@ -102,6 +102,17 @@ static void drm_gem_shmem_test_obj_create_private(struct kunit *test)
sg_init_one(sgt->sgl, buf, TEST_SIZE); sg_init_one(sgt->sgl, buf, TEST_SIZE);
/*
* Set the DMA mask to 64-bits and map the sgtables
* otherwise drm_gem_shmem_free will cause a warning
* on debug kernels.
*/
ret = dma_set_mask(drm_dev->dev, DMA_BIT_MASK(64));
KUNIT_ASSERT_EQ(test, ret, 0);
ret = dma_map_sgtable(drm_dev->dev, sgt, DMA_BIDIRECTIONAL, 0);
KUNIT_ASSERT_EQ(test, ret, 0);
/* Init a mock DMA-BUF */ /* Init a mock DMA-BUF */
buf_mock.size = TEST_SIZE; buf_mock.size = TEST_SIZE;
attach_mock.dmabuf = &buf_mock; attach_mock.dmabuf = &buf_mock;

View File

@ -203,9 +203,10 @@ static int xe_hwmon_power_max_write(struct xe_hwmon *hwmon, int channel, long va
reg_val = xe_mmio_rmw32(hwmon->gt, rapl_limit, PKG_PWR_LIM_1_EN, 0); reg_val = xe_mmio_rmw32(hwmon->gt, rapl_limit, PKG_PWR_LIM_1_EN, 0);
reg_val = xe_mmio_read32(hwmon->gt, rapl_limit); reg_val = xe_mmio_read32(hwmon->gt, rapl_limit);
if (reg_val & PKG_PWR_LIM_1_EN) { if (reg_val & PKG_PWR_LIM_1_EN) {
drm_warn(&gt_to_xe(hwmon->gt)->drm, "PL1 disable is not supported!\n");
ret = -EOPNOTSUPP; ret = -EOPNOTSUPP;
goto unlock;
} }
goto unlock;
} }
/* Computation in 64-bits to avoid overflow. Round to nearest. */ /* Computation in 64-bits to avoid overflow. Round to nearest. */

View File

@ -1634,6 +1634,9 @@ struct xe_lrc_snapshot *xe_lrc_snapshot_capture(struct xe_lrc *lrc)
if (!snapshot) if (!snapshot)
return NULL; return NULL;
if (lrc->bo && lrc->bo->vm)
xe_vm_get(lrc->bo->vm);
snapshot->context_desc = xe_lrc_ggtt_addr(lrc); snapshot->context_desc = xe_lrc_ggtt_addr(lrc);
snapshot->indirect_context_desc = xe_lrc_indirect_ring_ggtt_addr(lrc); snapshot->indirect_context_desc = xe_lrc_indirect_ring_ggtt_addr(lrc);
snapshot->head = xe_lrc_ring_head(lrc); snapshot->head = xe_lrc_ring_head(lrc);
@ -1653,12 +1656,14 @@ struct xe_lrc_snapshot *xe_lrc_snapshot_capture(struct xe_lrc *lrc)
void xe_lrc_snapshot_capture_delayed(struct xe_lrc_snapshot *snapshot) void xe_lrc_snapshot_capture_delayed(struct xe_lrc_snapshot *snapshot)
{ {
struct xe_bo *bo; struct xe_bo *bo;
struct xe_vm *vm;
struct iosys_map src; struct iosys_map src;
if (!snapshot) if (!snapshot)
return; return;
bo = snapshot->lrc_bo; bo = snapshot->lrc_bo;
vm = bo->vm;
snapshot->lrc_bo = NULL; snapshot->lrc_bo = NULL;
snapshot->lrc_snapshot = kvmalloc(snapshot->lrc_size, GFP_KERNEL); snapshot->lrc_snapshot = kvmalloc(snapshot->lrc_size, GFP_KERNEL);
@ -1678,6 +1683,8 @@ void xe_lrc_snapshot_capture_delayed(struct xe_lrc_snapshot *snapshot)
xe_bo_unlock(bo); xe_bo_unlock(bo);
put_bo: put_bo:
xe_bo_put(bo); xe_bo_put(bo);
if (vm)
xe_vm_put(vm);
} }
void xe_lrc_snapshot_print(struct xe_lrc_snapshot *snapshot, struct drm_printer *p) void xe_lrc_snapshot_print(struct xe_lrc_snapshot *snapshot, struct drm_printer *p)
@ -1727,8 +1734,14 @@ void xe_lrc_snapshot_free(struct xe_lrc_snapshot *snapshot)
return; return;
kvfree(snapshot->lrc_snapshot); kvfree(snapshot->lrc_snapshot);
if (snapshot->lrc_bo) if (snapshot->lrc_bo) {
struct xe_vm *vm;
vm = snapshot->lrc_bo->vm;
xe_bo_put(snapshot->lrc_bo); xe_bo_put(snapshot->lrc_bo);
if (vm)
xe_vm_put(vm);
}
kfree(snapshot); kfree(snapshot);
} }

View File

@ -231,7 +231,7 @@ static void rtp_mark_active(struct xe_device *xe,
if (first == last) if (first == last)
bitmap_set(ctx->active_entries, first, 1); bitmap_set(ctx->active_entries, first, 1);
else else
bitmap_set(ctx->active_entries, first, last - first + 2); bitmap_set(ctx->active_entries, first, last - first + 1);
} }
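The rtp_mark_active() fix above corrects the bitmap_set() count for the inclusive range [first, last] from last - first + 2 to last - first + 1, since an inclusive range covers last - first + 1 entries. A two-case illustration with a toy single-word bitmap_set():

#include <stdio.h>

/* Set 'count' consecutive bits starting at 'first', like bitmap_set(). */
static void bitmap_set(unsigned int *map, unsigned int first, unsigned int count)
{
	while (count--)
		*map |= 1u << first++;
}

int main(void)
{
	unsigned int first = 3, last = 6, map;

	map = 0;
	bitmap_set(&map, first, last - first + 1);
	printf("correct:    %#x\n", map);	/* bits 3..6 -> 0x78 */

	map = 0;
	bitmap_set(&map, first, last - first + 2);
	printf("off by one: %#x\n", map);	/* also sets bit 7 -> 0xf8 */
	return 0;
}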
/** /**

View File

@ -263,7 +263,7 @@ void xe_sync_entry_cleanup(struct xe_sync_entry *sync)
if (sync->fence) if (sync->fence)
dma_fence_put(sync->fence); dma_fence_put(sync->fence);
if (sync->chain_fence) if (sync->chain_fence)
dma_fence_put(&sync->chain_fence->base); dma_fence_chain_free(sync->chain_fence);
if (sync->ufence) if (sync->ufence)
user_fence_put(sync->ufence); user_fence_put(sync->ufence);
} }

View File

@ -150,7 +150,7 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man,
} while (remaining_size); } while (remaining_size);
if (place->flags & TTM_PL_FLAG_CONTIGUOUS) { if (place->flags & TTM_PL_FLAG_CONTIGUOUS) {
if (!drm_buddy_block_trim(mm, vres->base.size, &vres->blocks)) if (!drm_buddy_block_trim(mm, NULL, vres->base.size, &vres->blocks))
size = vres->base.size; size = vres->base.size;
} }

View File

@ -990,8 +990,11 @@ static int __maybe_unused geni_i2c_runtime_resume(struct device *dev)
return ret; return ret;
ret = geni_se_resources_on(&gi2c->se); ret = geni_se_resources_on(&gi2c->se);
if (ret) if (ret) {
clk_disable_unprepare(gi2c->core_clk);
geni_icc_disable(&gi2c->se);
return ret; return ret;
}
enable_irq(gi2c->irq); enable_irq(gi2c->irq);
gi2c->suspended = 0; gi2c->suspended = 0;

View File

@ -18,7 +18,7 @@
enum testunit_cmds { enum testunit_cmds {
TU_CMD_READ_BYTES = 1, /* save 0 for ABORT, RESET or similar */ TU_CMD_READ_BYTES = 1, /* save 0 for ABORT, RESET or similar */
TU_CMD_HOST_NOTIFY, TU_CMD_SMBUS_HOST_NOTIFY,
TU_CMD_SMBUS_BLOCK_PROC_CALL, TU_CMD_SMBUS_BLOCK_PROC_CALL,
TU_NUM_CMDS TU_NUM_CMDS
}; };
@ -60,7 +60,7 @@ static void i2c_slave_testunit_work(struct work_struct *work)
msg.len = tu->regs[TU_REG_DATAH]; msg.len = tu->regs[TU_REG_DATAH];
break; break;
case TU_CMD_HOST_NOTIFY: case TU_CMD_SMBUS_HOST_NOTIFY:
msg.addr = 0x08; msg.addr = 0x08;
msg.flags = 0; msg.flags = 0;
msg.len = 3; msg.len = 3;

View File

@ -34,6 +34,7 @@ static int smbus_do_alert(struct device *dev, void *addrp)
struct i2c_client *client = i2c_verify_client(dev); struct i2c_client *client = i2c_verify_client(dev);
struct alert_data *data = addrp; struct alert_data *data = addrp;
struct i2c_driver *driver; struct i2c_driver *driver;
int ret;
if (!client || client->addr != data->addr) if (!client || client->addr != data->addr)
return 0; return 0;
@ -47,16 +48,47 @@ static int smbus_do_alert(struct device *dev, void *addrp)
device_lock(dev); device_lock(dev);
if (client->dev.driver) { if (client->dev.driver) {
driver = to_i2c_driver(client->dev.driver); driver = to_i2c_driver(client->dev.driver);
if (driver->alert) if (driver->alert) {
/* Stop iterating after we find the device */
driver->alert(client, data->type, data->data); driver->alert(client, data->type, data->data);
else ret = -EBUSY;
} else {
dev_warn(&client->dev, "no driver alert()!\n"); dev_warn(&client->dev, "no driver alert()!\n");
} else ret = -EOPNOTSUPP;
}
} else {
dev_dbg(&client->dev, "alert with no driver\n"); dev_dbg(&client->dev, "alert with no driver\n");
ret = -ENODEV;
}
device_unlock(dev); device_unlock(dev);
/* Stop iterating after we find the device */ return ret;
return -EBUSY; }
/* Same as above, but call back all drivers with alert handler */
static int smbus_do_alert_force(struct device *dev, void *addrp)
{
struct i2c_client *client = i2c_verify_client(dev);
struct alert_data *data = addrp;
struct i2c_driver *driver;
if (!client || (client->flags & I2C_CLIENT_TEN))
return 0;
/*
* Drivers should either disable alerts, or provide at least
* a minimal handler. Lock so the driver won't change.
*/
device_lock(dev);
if (client->dev.driver) {
driver = to_i2c_driver(client->dev.driver);
if (driver->alert)
driver->alert(client, data->type, data->data);
}
device_unlock(dev);
return 0;
} }
/* /*
@ -67,6 +99,7 @@ static irqreturn_t smbus_alert(int irq, void *d)
{ {
struct i2c_smbus_alert *alert = d; struct i2c_smbus_alert *alert = d;
struct i2c_client *ara; struct i2c_client *ara;
unsigned short prev_addr = I2C_CLIENT_END; /* Not a valid address */
ara = alert->ara; ara = alert->ara;
@ -94,8 +127,25 @@ static irqreturn_t smbus_alert(int irq, void *d)
data.addr, data.data); data.addr, data.data);
/* Notify driver for the device which issued the alert */ /* Notify driver for the device which issued the alert */
device_for_each_child(&ara->adapter->dev, &data, status = device_for_each_child(&ara->adapter->dev, &data,
smbus_do_alert); smbus_do_alert);
/*
* If we read the same address more than once, and the alert
* was not handled by a driver, it won't do any good to repeat
* the loop because it will never terminate. Try again, this
* time calling the alert handlers of all devices connected to
* the bus, and abort the loop afterwards. If this helps, we
* are all set. If it doesn't, there is nothing else we can do,
* so we might as well abort the loop.
* Note: This assumes that a driver with alert handler handles
* the alert properly and clears it if necessary.
*/
if (data.addr == prev_addr && status != -EBUSY) {
device_for_each_child(&ara->adapter->dev, &data,
smbus_do_alert_force);
break;
}
prev_addr = data.addr;
} }
return IRQ_HANDLED; return IRQ_HANDLED;
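The reworked alert loop above remembers the previously reported address; if the same address comes back and no bound driver claimed it (smbus_do_alert() did not return -EBUSY), it notifies every driver on the bus once via smbus_do_alert_force() and stops, so a sticky, unhandled alert cannot spin the handler forever. A simplified sketch of that loop control; read_alert_source() and the notify helpers are stand-ins, not the kernel API.

#include <stdio.h>
#include <errno.h>

#define INVALID_ADDR 0xffff

/* Pretend the ARA keeps reporting the same unhandled address. */
static int read_alert_source(unsigned short *addr)
{
	*addr = 0x2a;
	return 0;
}

/* -EBUSY means "a driver handled it", anything else means it did not. */
static int notify_one(unsigned short addr)
{
	printf("alert from 0x%02x: no driver handled it\n", addr);
	return -ENODEV;
}

static void notify_all_force(unsigned short addr)
{
	printf("forcing alert 0x%02x to every driver on the bus\n", addr);
}

int main(void)
{
	unsigned short addr, prev_addr = INVALID_ADDR;
	int status;

	while (read_alert_source(&addr) == 0) {
		status = notify_one(addr);

		if (addr == prev_addr && status != -EBUSY) {
			notify_all_force(addr);
			break;		/* nothing more we can do */
		}
		prev_addr = addr;
	}
	return 0;
}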

View File

@ -32,15 +32,10 @@ static void aplic_msi_irq_unmask(struct irq_data *d)
aplic_irq_unmask(d); aplic_irq_unmask(d);
} }
static void aplic_msi_irq_eoi(struct irq_data *d) static void aplic_msi_irq_retrigger_level(struct irq_data *d)
{ {
struct aplic_priv *priv = irq_data_get_irq_chip_data(d); struct aplic_priv *priv = irq_data_get_irq_chip_data(d);
/*
* EOI handling is required only for level-triggered interrupts
* when APLIC is in MSI mode.
*/
switch (irqd_get_trigger_type(d)) { switch (irqd_get_trigger_type(d)) {
case IRQ_TYPE_LEVEL_LOW: case IRQ_TYPE_LEVEL_LOW:
case IRQ_TYPE_LEVEL_HIGH: case IRQ_TYPE_LEVEL_HIGH:
@ -59,6 +54,29 @@ static void aplic_msi_irq_eoi(struct irq_data *d)
} }
} }
static void aplic_msi_irq_eoi(struct irq_data *d)
{
/*
* EOI handling is required only for level-triggered interrupts
* when APLIC is in MSI mode.
*/
aplic_msi_irq_retrigger_level(d);
}
static int aplic_msi_irq_set_type(struct irq_data *d, unsigned int type)
{
int rc = aplic_irq_set_type(d, type);
if (rc)
return rc;
/*
* Updating sourcecfg register for level-triggered interrupts
* requires interrupt retriggering when APLIC is in MSI mode.
*/
aplic_msi_irq_retrigger_level(d);
return 0;
}
static void aplic_msi_write_msg(struct irq_data *d, struct msi_msg *msg) static void aplic_msi_write_msg(struct irq_data *d, struct msi_msg *msg)
{ {
unsigned int group_index, hart_index, guest_index, val; unsigned int group_index, hart_index, guest_index, val;
@ -130,7 +148,7 @@ static const struct msi_domain_template aplic_msi_template = {
.name = "APLIC-MSI", .name = "APLIC-MSI",
.irq_mask = aplic_msi_irq_mask, .irq_mask = aplic_msi_irq_mask,
.irq_unmask = aplic_msi_irq_unmask, .irq_unmask = aplic_msi_irq_unmask,
.irq_set_type = aplic_irq_set_type, .irq_set_type = aplic_msi_irq_set_type,
.irq_eoi = aplic_msi_irq_eoi, .irq_eoi = aplic_msi_irq_eoi,
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
.irq_set_affinity = irq_chip_set_affinity_parent, .irq_set_affinity = irq_chip_set_affinity_parent,

View File

@ -189,7 +189,7 @@ static int __init xilinx_intc_of_init(struct device_node *intc,
irqc->intr_mask = 0; irqc->intr_mask = 0;
} }
if (irqc->intr_mask >> irqc->nr_irq) if ((u64)irqc->intr_mask >> irqc->nr_irq)
pr_warn("irq-xilinx: mismatch in kind-of-intr param\n"); pr_warn("irq-xilinx: mismatch in kind-of-intr param\n");
pr_info("irq-xilinx: %pOF: num_irq=%d, edge=0x%x\n", pr_info("irq-xilinx: %pOF: num_irq=%d, edge=0x%x\n",

View File

@ -587,7 +587,7 @@ config NSM
config MARVELL_CN10K_DPI config MARVELL_CN10K_DPI
tristate "Octeon CN10K DPI driver" tristate "Octeon CN10K DPI driver"
depends on PCI depends on PCI && PCI_IOV
depends on ARCH_THUNDER || (COMPILE_TEST && 64BIT) depends on ARCH_THUNDER || (COMPILE_TEST && 64BIT)
help help
Enables Octeon CN10K DMA packet interface (DPI) driver which Enables Octeon CN10K DMA packet interface (DPI) driver which

View File

@ -233,6 +233,49 @@ static void ee1004_cleanup_bus_data(void *data)
mutex_unlock(&ee1004_bus_lock); mutex_unlock(&ee1004_bus_lock);
} }
static int ee1004_init_bus_data(struct i2c_client *client)
{
struct ee1004_bus_data *bd;
int err, cnr = 0;
bd = ee1004_get_bus_data(client->adapter);
if (!bd)
return dev_err_probe(&client->dev, -ENOSPC, "Only %d busses supported",
EE1004_MAX_BUSSES);
i2c_set_clientdata(client, bd);
if (++bd->dev_count == 1) {
/* Use 2 dummy devices for page select command */
for (cnr = 0; cnr < EE1004_NUM_PAGES; cnr++) {
struct i2c_client *cl;
cl = i2c_new_dummy_device(client->adapter, EE1004_ADDR_SET_PAGE + cnr);
if (IS_ERR(cl)) {
err = PTR_ERR(cl);
goto err_out;
}
bd->set_page[cnr] = cl;
}
/* Remember current page to avoid unneeded page select */
err = ee1004_get_current_page(bd);
if (err < 0)
goto err_out;
dev_dbg(&client->dev, "Currently selected page: %d\n", err);
bd->current_page = err;
}
return 0;
err_out:
ee1004_cleanup(cnr, bd);
return err;
}
static int ee1004_probe(struct i2c_client *client) static int ee1004_probe(struct i2c_client *client)
{ {
struct nvmem_config config = { struct nvmem_config config = {
@ -251,9 +294,8 @@ static int ee1004_probe(struct i2c_client *client)
.compat = true, .compat = true,
.base_dev = &client->dev, .base_dev = &client->dev,
}; };
struct ee1004_bus_data *bd;
struct nvmem_device *ndev; struct nvmem_device *ndev;
int err, cnr = 0; int err;
/* Make sure we can operate on this adapter */ /* Make sure we can operate on this adapter */
if (!i2c_check_functionality(client->adapter, if (!i2c_check_functionality(client->adapter,
@@ -264,46 +306,21 @@ static int ee1004_probe(struct i2c_client *client)
 
 	mutex_lock(&ee1004_bus_lock);
 
-	bd = ee1004_get_bus_data(client->adapter);
-	if (!bd) {
-		mutex_unlock(&ee1004_bus_lock);
-		return dev_err_probe(&client->dev, -ENOSPC,
-				     "Only %d busses supported", EE1004_MAX_BUSSES);
-	}
-
-	err = devm_add_action_or_reset(&client->dev, ee1004_cleanup_bus_data, bd);
-	if (err < 0)
-		return err;
-
-	i2c_set_clientdata(client, bd);
-
-	if (++bd->dev_count == 1) {
-		/* Use 2 dummy devices for page select command */
-		for (cnr = 0; cnr < EE1004_NUM_PAGES; cnr++) {
-			struct i2c_client *cl;
-
-			cl = i2c_new_dummy_device(client->adapter, EE1004_ADDR_SET_PAGE + cnr);
-			if (IS_ERR(cl)) {
-				mutex_unlock(&ee1004_bus_lock);
-				return PTR_ERR(cl);
-			}
-			bd->set_page[cnr] = cl;
-		}
-
-		/* Remember current page to avoid unneeded page select */
-		err = ee1004_get_current_page(bd);
+	err = ee1004_init_bus_data(client);
 	if (err < 0) {
 		mutex_unlock(&ee1004_bus_lock);
 		return err;
 	}
-		dev_dbg(&client->dev, "Currently selected page: %d\n", err);
-		bd->current_page = err;
-	}
 
 	ee1004_probe_temp_sensor(client);
 
 	mutex_unlock(&ee1004_bus_lock);
 
+	err = devm_add_action_or_reset(&client->dev, ee1004_cleanup_bus_data,
+				       i2c_get_clientdata(client));
+	if (err < 0)
+		return err;
+
 	ndev = devm_nvmem_register(&client->dev, &config);
 	if (IS_ERR(ndev))
 		return PTR_ERR(ndev);


@@ -675,8 +675,10 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
 			of_remove_property(child, prop);
 
 		phydev = of_phy_find_device(child);
-		if (phydev)
+		if (phydev) {
 			phy_device_remove(phydev);
+			phy_device_free(phydev);
+		}
 	}
 
 	err = mdiobus_register(priv->user_mii_bus);


@@ -2578,7 +2578,11 @@ static u32 ksz_get_phy_flags(struct dsa_switch *ds, int port)
 		if (!port)
 			return MICREL_KSZ8_P1_ERRATA;
 		break;
+	case KSZ8567_CHIP_ID:
 	case KSZ9477_CHIP_ID:
+	case KSZ9567_CHIP_ID:
+	case KSZ9896_CHIP_ID:
+	case KSZ9897_CHIP_ID:
 		/* KSZ9477 Errata DS80000754C
 		 *
 		 * Module 4: Energy Efficient Ethernet (EEE) feature select must
@@ -2588,6 +2592,13 @@ static u32 ksz_get_phy_flags(struct dsa_switch *ds, int port)
 		 * controls. If not disabled, the PHY ports can auto-negotiate
 		 * to enable EEE, and this feature can cause link drops when
 		 * linked to another device supporting EEE.
+		 *
+		 * The same item appears in the errata for the KSZ9567, KSZ9896,
+		 * and KSZ9897.
+		 *
+		 * A similar item appears in the errata for the KSZ8567, but
+		 * provides an alternative workaround. For now, use the simple
+		 * workaround of disabling the EEE feature for this device too.
 		 */
 		return MICREL_NO_EEE;
 	}
@@ -3764,6 +3775,11 @@ static int ksz_port_set_mac_address(struct dsa_switch *ds, int port,
 		return -EBUSY;
 	}
 
+	/* Need to initialize variable as the code to fill in settings may
+	 * not be executed.
+	 */
+	wol.wolopts = 0;
+
 	ksz_get_wol(ds, dp->index, &wol);
 	if (wol.wolopts & WAKE_MAGIC) {
 		dev_err(ds->dev,


@@ -7591,19 +7591,20 @@ static bool bnxt_need_reserve_rings(struct bnxt *bp)
 	int rx = bp->rx_nr_rings, stat;
 	int vnic, grp = rx;
 
-	if (hw_resc->resv_tx_rings != bp->tx_nr_rings &&
-	    bp->hwrm_spec_code >= 0x10601)
-		return true;
-
 	/* Old firmware does not need RX ring reservations but we still
 	 * need to setup a default RSS map when needed. With new firmware
 	 * we go through RX ring reservations first and then set up the
 	 * RSS map for the successfully reserved RX rings when needed.
 	 */
-	if (!BNXT_NEW_RM(bp)) {
+	if (!BNXT_NEW_RM(bp))
 		bnxt_check_rss_tbl_no_rmgr(bp);
+
+	if (hw_resc->resv_tx_rings != bp->tx_nr_rings &&
+	    bp->hwrm_spec_code >= 0x10601)
+		return true;
+
+	if (!BNXT_NEW_RM(bp))
 		return false;
-	}
 
 	vnic = bnxt_get_total_vnics(bp, rx);


@@ -5290,7 +5290,7 @@ void bnxt_ethtool_free(struct bnxt *bp)
 const struct ethtool_ops bnxt_ethtool_ops = {
 	.cap_link_lanes_supported = 1,
 	.cap_rss_ctx_supported = 1,
-	.rxfh_max_context_id = BNXT_MAX_ETH_RSS_CTX,
+	.rxfh_max_num_contexts = BNXT_MAX_ETH_RSS_CTX + 1,
 	.rxfh_indir_space = BNXT_MAX_RSS_TABLE_ENTRIES_P5,
 	.rxfh_priv_size = sizeof(struct bnxt_rss_ctx),
 	.supported_coalesce_params = ETHTOOL_COALESCE_USECS |


@@ -42,19 +42,15 @@ void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
 	struct bcmgenet_priv *priv = netdev_priv(dev);
 	struct device *kdev = &priv->pdev->dev;
 
-	if (dev->phydev) {
+	if (dev->phydev)
 		phy_ethtool_get_wol(dev->phydev, wol);
-		if (wol->supported)
-			return;
-	}
 
-	if (!device_can_wakeup(kdev)) {
-		wol->supported = 0;
-		wol->wolopts = 0;
+	/* MAC is not wake-up capable, return what the PHY does */
+	if (!device_can_wakeup(kdev))
 		return;
-	}
 
-	wol->supported = WAKE_MAGIC | WAKE_MAGICSECURE | WAKE_FILTER;
+	/* Overlay MAC capabilities with that of the PHY queried before */
+	wol->supported |= WAKE_MAGIC | WAKE_MAGICSECURE | WAKE_FILTER;
 	wol->wolopts = priv->wolopts;
 	memset(wol->sopass, 0, sizeof(wol->sopass));


@@ -775,6 +775,9 @@ void fec_ptp_stop(struct platform_device *pdev)
 	struct net_device *ndev = platform_get_drvdata(pdev);
 	struct fec_enet_private *fep = netdev_priv(ndev);
 
+	if (fep->pps_enable)
+		fec_ptp_enable_pps(fep, 0);
+
 	cancel_delayed_work_sync(&fep->time_keep);
 	hrtimer_cancel(&fep->perout_timer);
 	if (fep->ptp_clock)


@@ -495,7 +495,7 @@ static int gve_set_channels(struct net_device *netdev,
 		return -EINVAL;
 	}
 
-	if (!netif_carrier_ok(netdev)) {
+	if (!netif_running(netdev)) {
 		priv->tx_cfg.num_queues = new_tx;
 		priv->rx_cfg.num_queues = new_rx;
 		return 0;


@@ -1566,7 +1566,7 @@ static int gve_set_xdp(struct gve_priv *priv, struct bpf_prog *prog,
 	u32 status;
 
 	old_prog = READ_ONCE(priv->xdp_prog);
-	if (!netif_carrier_ok(priv->dev)) {
+	if (!netif_running(priv->dev)) {
 		WRITE_ONCE(priv->xdp_prog, prog);
 		if (old_prog)
 			bpf_prog_put(old_prog);
@@ -1847,7 +1847,7 @@ int gve_adjust_queues(struct gve_priv *priv,
 	rx_alloc_cfg.qcfg = &new_rx_config;
 	tx_alloc_cfg.num_rings = new_tx_config.num_queues;
 
-	if (netif_carrier_ok(priv->dev)) {
+	if (netif_running(priv->dev)) {
 		err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg);
 		return err;
 	}
@@ -2064,7 +2064,7 @@ static int gve_set_features(struct net_device *netdev,
 
 	if ((netdev->features & NETIF_F_LRO) != (features & NETIF_F_LRO)) {
 		netdev->features ^= NETIF_F_LRO;
-		if (netif_carrier_ok(netdev)) {
+		if (netif_running(netdev)) {
 			err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg);
 			if (err)
 				goto revert_features;
@@ -2359,7 +2359,7 @@ err:
 
 int gve_reset(struct gve_priv *priv, bool attempt_teardown)
 {
-	bool was_up = netif_carrier_ok(priv->dev);
+	bool was_up = netif_running(priv->dev);
 	int err;
 
 	dev_info(&priv->pdev->dev, "Performing reset\n");
@@ -2700,7 +2700,7 @@ static void gve_shutdown(struct pci_dev *pdev)
 {
 	struct net_device *netdev = pci_get_drvdata(pdev);
 	struct gve_priv *priv = netdev_priv(netdev);
-	bool was_up = netif_carrier_ok(priv->dev);
+	bool was_up = netif_running(priv->dev);
 
 	rtnl_lock();
 	if (was_up && gve_close(priv->dev)) {
@@ -2718,7 +2718,7 @@ static int gve_suspend(struct pci_dev *pdev, pm_message_t state)
 {
 	struct net_device *netdev = pci_get_drvdata(pdev);
 	struct gve_priv *priv = netdev_priv(netdev);
-	bool was_up = netif_carrier_ok(priv->dev);
+	bool was_up = netif_running(priv->dev);
 
 	priv->suspend_cnt++;
 	rtnl_lock();


@@ -4673,9 +4673,9 @@ static int ice_get_port_fec_stats(struct ice_hw *hw, u16 pcs_quad, u16 pcs_port,
 	if (err)
 		return err;
 
-	fec_stats->uncorrectable_blocks.total = (fec_corr_high_val << 16) +
+	fec_stats->corrected_blocks.total = (fec_corr_high_val << 16) +
 					      fec_corr_low_val;
-	fec_stats->corrected_blocks.total = (fec_uncorr_high_val << 16) +
+	fec_stats->uncorrectable_blocks.total = (fec_uncorr_high_val << 16) +
 					      fec_uncorr_low_val;
 	return 0;
 }


@@ -559,6 +559,8 @@ ice_prepare_for_reset(struct ice_pf *pf, enum ice_reset_req reset_type)
 	if (test_bit(ICE_PREPARED_FOR_RESET, pf->state))
 		return;
 
+	synchronize_irq(pf->oicr_irq.virq);
+
 	ice_unplug_aux_dev(pf);
 
 	/* Notify VFs of impending reset */


@@ -1477,6 +1477,10 @@ void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
 	/* Update cached link status for this port immediately */
 	ptp_port->link_up = linkup;
 
+	/* Skip HW writes if reset is in progress */
+	if (pf->hw.reset_ongoing)
+		return;
+
 	switch (hw->ptp.phy_model) {
 	case ICE_PHY_E810:
 		/* Do not reconfigure E810 PHY */


@@ -900,8 +900,8 @@ static void idpf_vport_stop(struct idpf_vport *vport)
 	vport->link_up = false;
 
 	idpf_vport_intr_deinit(vport);
-	idpf_vport_intr_rel(vport);
 	idpf_vport_queues_rel(vport);
+	idpf_vport_intr_rel(vport);
 
 	np->state = __IDPF_VPORT_DOWN;
 }
@@ -1335,9 +1335,8 @@ static void idpf_rx_init_buf_tail(struct idpf_vport *vport)
 /**
  * idpf_vport_open - Bring up a vport
  * @vport: vport to bring up
- * @alloc_res: allocate queue resources
  */
-static int idpf_vport_open(struct idpf_vport *vport, bool alloc_res)
+static int idpf_vport_open(struct idpf_vport *vport)
 {
 	struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
 	struct idpf_adapter *adapter = vport->adapter;
@@ -1350,45 +1349,43 @@
 	/* we do not allow interface up just yet */
 	netif_carrier_off(vport->netdev);
 
-	if (alloc_res) {
-		err = idpf_vport_queues_alloc(vport);
-		if (err)
-			return err;
-	}
-
 	err = idpf_vport_intr_alloc(vport);
 	if (err) {
 		dev_err(&adapter->pdev->dev, "Failed to allocate interrupts for vport %u: %d\n",
 			vport->vport_id, err);
-		goto queues_rel;
+		return err;
 	}
 
+	err = idpf_vport_queues_alloc(vport);
+	if (err)
+		goto intr_rel;
+
 	err = idpf_vport_queue_ids_init(vport);
 	if (err) {
 		dev_err(&adapter->pdev->dev, "Failed to initialize queue ids for vport %u: %d\n",
 			vport->vport_id, err);
-		goto intr_rel;
+		goto queues_rel;
 	}
 
 	err = idpf_vport_intr_init(vport);
 	if (err) {
 		dev_err(&adapter->pdev->dev, "Failed to initialize interrupts for vport %u: %d\n",
 			vport->vport_id, err);
-		goto intr_rel;
+		goto queues_rel;
 	}
 
 	err = idpf_rx_bufs_init_all(vport);
 	if (err) {
 		dev_err(&adapter->pdev->dev, "Failed to initialize RX buffers for vport %u: %d\n",
 			vport->vport_id, err);
-		goto intr_rel;
+		goto queues_rel;
 	}
 
 	err = idpf_queue_reg_init(vport);
 	if (err) {
 		dev_err(&adapter->pdev->dev, "Failed to initialize queue registers for vport %u: %d\n",
 			vport->vport_id, err);
-		goto intr_rel;
+		goto queues_rel;
 	}
 
 	idpf_rx_init_buf_tail(vport);
@@ -1455,10 +1452,10 @@ unmap_queue_vectors:
 	idpf_send_map_unmap_queue_vector_msg(vport, false);
 intr_deinit:
 	idpf_vport_intr_deinit(vport);
-intr_rel:
-	idpf_vport_intr_rel(vport);
 queues_rel:
 	idpf_vport_queues_rel(vport);
+intr_rel:
+	idpf_vport_intr_rel(vport);
 
 	return err;
 }
@@ -1539,7 +1536,7 @@ void idpf_init_task(struct work_struct *work)
 	np = netdev_priv(vport->netdev);
 	np->state = __IDPF_VPORT_DOWN;
 	if (test_and_clear_bit(IDPF_VPORT_UP_REQUESTED, vport_config->flags))
-		idpf_vport_open(vport, true);
+		idpf_vport_open(vport);
 
 	/* Spawn and return 'idpf_init_task' work queue until all the
 	 * default vports are created
@@ -1898,9 +1895,6 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
 		goto free_vport;
 	}
 
-	err = idpf_vport_queues_alloc(new_vport);
-	if (err)
-		goto free_vport;
 	if (current_state <= __IDPF_VPORT_DOWN) {
 		idpf_send_delete_queues_msg(vport);
 	} else {
@@ -1932,17 +1926,23 @@
 	err = idpf_set_real_num_queues(vport);
 	if (err)
-		goto err_reset;
+		goto err_open;
 
 	if (current_state == __IDPF_VPORT_UP)
-		err = idpf_vport_open(vport, false);
+		err = idpf_vport_open(vport);
 
 	kfree(new_vport);
 
 	return err;
 
 err_reset:
-	idpf_vport_queues_rel(new_vport);
+	idpf_send_add_queues_msg(vport, vport->num_txq, vport->num_complq,
+				 vport->num_rxq, vport->num_bufq);
+
+err_open:
+	if (current_state == __IDPF_VPORT_UP)
+		idpf_vport_open(vport);
+
 free_vport:
 	kfree(new_vport);
@@ -2171,7 +2171,7 @@ static int idpf_open(struct net_device *netdev)
 	idpf_vport_ctrl_lock(netdev);
 	vport = idpf_netdev_to_vport(netdev);
 
-	err = idpf_vport_open(vport, true);
+	err = idpf_vport_open(vport);
 
 	idpf_vport_ctrl_unlock(netdev);


@@ -3576,9 +3576,7 @@ static void idpf_vport_intr_napi_dis_all(struct idpf_vport *vport)
  */
 void idpf_vport_intr_rel(struct idpf_vport *vport)
 {
-	int i, j, v_idx;
-
-	for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
+	for (u32 v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
 		struct idpf_q_vector *q_vector = &vport->q_vectors[v_idx];
 
 		kfree(q_vector->complq);
@@ -3593,26 +3591,6 @@ void idpf_vport_intr_rel(struct idpf_vport *vport)
 		free_cpumask_var(q_vector->affinity_mask);
 	}
 
-	/* Clean up the mapping of queues to vectors */
-	for (i = 0; i < vport->num_rxq_grp; i++) {
-		struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
-
-		if (idpf_is_queue_model_split(vport->rxq_model))
-			for (j = 0; j < rx_qgrp->splitq.num_rxq_sets; j++)
-				rx_qgrp->splitq.rxq_sets[j]->rxq.q_vector = NULL;
-		else
-			for (j = 0; j < rx_qgrp->singleq.num_rxq; j++)
-				rx_qgrp->singleq.rxqs[j]->q_vector = NULL;
-	}
-
-	if (idpf_is_queue_model_split(vport->txq_model))
-		for (i = 0; i < vport->num_txq_grp; i++)
-			vport->txq_grps[i].complq->q_vector = NULL;
-	else
-		for (i = 0; i < vport->num_txq_grp; i++)
-			for (j = 0; j < vport->txq_grps[i].num_txq; j++)
-				vport->txq_grps[i].txqs[j]->q_vector = NULL;
-
 	kfree(vport->q_vectors);
 	vport->q_vectors = NULL;
 }
@@ -3780,13 +3758,15 @@ void idpf_vport_intr_update_itr_ena_irq(struct idpf_q_vector *q_vector)
 /**
  * idpf_vport_intr_req_irq - get MSI-X vectors from the OS for the vport
  * @vport: main vport structure
- * @basename: name for the vector
  */
-static int idpf_vport_intr_req_irq(struct idpf_vport *vport, char *basename)
+static int idpf_vport_intr_req_irq(struct idpf_vport *vport)
 {
 	struct idpf_adapter *adapter = vport->adapter;
+	const char *drv_name, *if_name, *vec_name;
 	int vector, err, irq_num, vidx;
-	const char *vec_name;
+
+	drv_name = dev_driver_string(&adapter->pdev->dev);
+	if_name = netdev_name(vport->netdev);
 
 	for (vector = 0; vector < vport->num_q_vectors; vector++) {
 		struct idpf_q_vector *q_vector = &vport->q_vectors[vector];
@@ -3804,8 +3784,8 @@ static int idpf_vport_intr_req_irq(struct idpf_vport *vport, char *basename)
 		else
 			continue;
 
-		name = kasprintf(GFP_KERNEL, "%s-%s-%d", basename, vec_name,
-				 vidx);
+		name = kasprintf(GFP_KERNEL, "%s-%s-%s-%d", drv_name, if_name,
+				 vec_name, vidx);
 
 		err = request_irq(irq_num, idpf_vport_intr_clean_queues, 0,
 				  name, q_vector);
@@ -4326,7 +4306,6 @@ error:
  */
 int idpf_vport_intr_init(struct idpf_vport *vport)
 {
-	char *int_name;
 	int err;
 
 	err = idpf_vport_intr_init_vec_idx(vport);
@@ -4340,11 +4319,7 @@ int idpf_vport_intr_init(struct idpf_vport *vport)
 	if (err)
 		goto unroll_vectors_alloc;
 
-	int_name = kasprintf(GFP_KERNEL, "%s-%s",
-			     dev_driver_string(&vport->adapter->pdev->dev),
-			     vport->netdev->name);
-
-	err = idpf_vport_intr_req_irq(vport, int_name);
+	err = idpf_vport_intr_req_irq(vport);
 	if (err)
 		goto unroll_vectors_alloc;
