Merge remote-tracking branch 'torvalds/master' into perf/core

To get upstream fixes.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Author: Arnaldo Carvalho de Melo
Date:   2022-07-27 11:08:48 -03:00
Commit: 40d02efad9
191 changed files with 2254 additions and 1225 deletions


@@ -135,6 +135,8 @@ Frank Rowand <frowand.list@gmail.com> <frowand@mvista.com>
 Frank Zago <fzago@systemfabricworks.com>
 Gao Xiang <xiang@kernel.org> <gaoxiang25@huawei.com>
 Gao Xiang <xiang@kernel.org> <hsiangkao@aol.com>
+Gao Xiang <xiang@kernel.org> <hsiangkao@linux.alibaba.com>
+Gao Xiang <xiang@kernel.org> <hsiangkao@redhat.com>
 Gerald Schaefer <gerald.schaefer@linux.ibm.com> <geraldsc@de.ibm.com>
 Gerald Schaefer <gerald.schaefer@linux.ibm.com> <gerald.schaefer@de.ibm.com>
 Gerald Schaefer <gerald.schaefer@linux.ibm.com> <geraldsc@linux.vnet.ibm.com>
@@ -371,6 +373,7 @@ Sean Nyekjaer <sean@geanix.com> <sean.nyekjaer@prevas.dk>
 Sebastian Reichel <sre@kernel.org> <sebastian.reichel@collabora.co.uk>
 Sebastian Reichel <sre@kernel.org> <sre@debian.org>
 Sedat Dilek <sedat.dilek@gmail.com> <sedat.dilek@credativ.de>
+Seth Forshee <sforshee@kernel.org> <seth.forshee@canonical.com>
 Shiraz Hashim <shiraz.linux.kernel@gmail.com> <shiraz.hashim@st.com>
 Shuah Khan <shuah@kernel.org> <shuahkhan@gmail.com>
 Shuah Khan <shuah@kernel.org> <shuah.khan@hp.com>


@@ -5796,6 +5796,24 @@
                         expediting. Set to zero to disable automatic
                         expediting.
 
+        srcutree.srcu_max_nodelay [KNL]
+                        Specifies the number of no-delay instances
+                        per jiffy for which the SRCU grace period
+                        worker thread will be rescheduled with zero
+                        delay. Beyond this limit, worker thread will
+                        be rescheduled with a sleep delay of one jiffy.
+
+        srcutree.srcu_max_nodelay_phase [KNL]
+                        Specifies the per-grace-period phase, number of
+                        non-sleeping polls of readers. Beyond this limit,
+                        grace period worker thread will be rescheduled
+                        with a sleep delay of one jiffy, between each
+                        rescan of the readers, for a grace period phase.
+
+        srcutree.srcu_retry_check_delay [KNL]
+                        Specifies number of microseconds of non-sleeping
+                        delay between each non-sleeping poll of readers.
+
         srcutree.small_contention_lim [KNL]
                         Specifies the number of update-side contention
                         events per jiffy will be tolerated before


@@ -503,26 +503,108 @@ per-port PHY specific details: interface connection, MDIO bus location, etc.
 Driver development
 ==================
 
-DSA switch drivers need to implement a dsa_switch_ops structure which will
+DSA switch drivers need to implement a ``dsa_switch_ops`` structure which will
 contain the various members described below.
 
-``register_switch_driver()`` registers this dsa_switch_ops in its internal list
-of drivers to probe for. ``unregister_switch_driver()`` does the exact opposite.
-
-Unless requested differently by setting the priv_size member accordingly, DSA
-does not allocate any driver private context space.
+Probing, registration and device lifetime
+-----------------------------------------
+
+DSA switches are regular ``device`` structures on buses (be they platform, SPI,
+I2C, MDIO or otherwise). The DSA framework is not involved in their probing
+with the device core.
+
+Switch registration from the perspective of a driver means passing a valid
+``struct dsa_switch`` pointer to ``dsa_register_switch()``, usually from the
+switch driver's probing function. The following members must be valid in the
+provided structure:
+
+- ``ds->dev``: will be used to parse the switch's OF node or platform data.
+- ``ds->num_ports``: will be used to create the port list for this switch, and
+  to validate the port indices provided in the OF node.
+- ``ds->ops``: a pointer to the ``dsa_switch_ops`` structure holding the DSA
+  method implementations.
+- ``ds->priv``: backpointer to a driver-private data structure which can be
+  retrieved in all further DSA method callbacks.
+
+In addition, the following flags in the ``dsa_switch`` structure may optionally
+be configured to obtain driver-specific behavior from the DSA core. Their
+behavior when set is documented through comments in ``include/net/dsa.h``.
+
+- ``ds->vlan_filtering_is_global``
+- ``ds->needs_standalone_vlan_filtering``
+- ``ds->configure_vlan_while_not_filtering``
+- ``ds->untag_bridge_pvid``
+- ``ds->assisted_learning_on_cpu_port``
+- ``ds->mtu_enforcement_ingress``
+- ``ds->fdb_isolation``
+
+Internally, DSA keeps an array of switch trees (group of switches) global to
+the kernel, and attaches a ``dsa_switch`` structure to a tree on registration.
+The tree ID to which the switch is attached is determined by the first u32
+number of the ``dsa,member`` property of the switch's OF node (0 if missing).
+The switch ID within the tree is determined by the second u32 number of the
+same OF property (0 if missing). Registering multiple switches with the same
+switch ID and tree ID is illegal and will cause an error. Using platform data,
+a single switch and a single switch tree is permitted.
+
+In case of a tree with multiple switches, probing takes place asymmetrically.
+The first N-1 callers of ``dsa_register_switch()`` only add their ports to the
+port list of the tree (``dst->ports``), each port having a backpointer to its
+associated switch (``dp->ds``). Then, these switches exit their
+``dsa_register_switch()`` call early, because ``dsa_tree_setup_routing_table()``
+has determined that the tree is not yet complete (not all ports referenced by
+DSA links are present in the tree's port list). The tree becomes complete when
+the last switch calls ``dsa_register_switch()``, and this triggers the effective
+continuation of initialization (including the call to ``ds->ops->setup()``) for
+all switches within that tree, all as part of the calling context of the last
+switch's probe function.
+
+The opposite of registration takes place when calling ``dsa_unregister_switch()``,
+which removes a switch's ports from the port list of the tree. The entire tree
+is torn down when the first switch unregisters.
+
+It is mandatory for DSA switch drivers to implement the ``shutdown()`` callback
+of their respective bus, and call ``dsa_switch_shutdown()`` from it (a minimal
+version of the full teardown performed by ``dsa_unregister_switch()``).
+The reason is that DSA keeps a reference on the master net device, and if the
+driver for the master device decides to unbind on shutdown, DSA's reference
+will block that operation from finalizing.
+
+Either ``dsa_switch_shutdown()`` or ``dsa_unregister_switch()`` must be called,
+but not both, and the device driver model permits the bus' ``remove()`` method
+to be called even if ``shutdown()`` was already called. Therefore, drivers are
+expected to implement a mutual exclusion method between ``remove()`` and
+``shutdown()`` by setting their drvdata to NULL after any of these has run, and
+checking whether the drvdata is NULL before proceeding to take any action.
+
+After ``dsa_switch_shutdown()`` or ``dsa_unregister_switch()`` was called, no
+further callbacks via the provided ``dsa_switch_ops`` may take place, and the
+driver may free the data structures associated with the ``dsa_switch``.
 
 Switch configuration
 --------------------
 
-- ``tag_protocol``: this is to indicate what kind of tagging protocol is supported,
-  should be a valid value from the ``dsa_tag_protocol`` enum
+- ``get_tag_protocol``: this is to indicate what kind of tagging protocol is
+  supported, should be a valid value from the ``dsa_tag_protocol`` enum.
+  The returned information does not have to be static; the driver is passed the
+  CPU port number, as well as the tagging protocol of a possibly stacked
+  upstream switch, in case there are hardware limitations in terms of supported
+  tag formats.
 
-- ``probe``: probe routine which will be invoked by the DSA platform device upon
-  registration to test for the presence/absence of a switch device. For MDIO
-  devices, it is recommended to issue a read towards internal registers using
-  the switch pseudo-PHY and return whether this is a supported device. For other
-  buses, return a non-NULL string
+- ``change_tag_protocol``: when the default tagging protocol has compatibility
+  problems with the master or other issues, the driver may support changing it
+  at runtime, either through a device tree property or through sysfs. In that
+  case, further calls to ``get_tag_protocol`` should report the protocol in
+  current use.
 
 - ``setup``: setup function for the switch, this function is responsible for setting
   up the ``dsa_switch_ops`` private structure with all it needs: register maps,
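As an illustration of the registration and lifetime rules documented in the hunk above, here is a minimal sketch of a hypothetical platform-bus DSA driver. Only ``dsa_register_switch()``, ``dsa_unregister_switch()``, ``dsa_switch_shutdown()`` and the ``dsa_switch_ops`` callbacks are real DSA API; the ``foo_*`` names, the port count and the tag protocol choice are placeholders.

```c
#include <linux/platform_device.h>
#include <net/dsa.h>

struct foo_priv {
	struct dsa_switch *ds;
	/* register map, locks, per-port state, ... */
};

static enum dsa_tag_protocol foo_get_tag_protocol(struct dsa_switch *ds, int port,
						  enum dsa_tag_protocol mprot)
{
	return DSA_TAG_PROTO_NONE;	/* placeholder */
}

static int foo_setup(struct dsa_switch *ds)
{
	/* software reset, initial port configuration, ... */
	return 0;
}

static const struct dsa_switch_ops foo_switch_ops = {
	.get_tag_protocol	= foo_get_tag_protocol,
	.setup			= foo_setup,
};

static int foo_probe(struct platform_device *pdev)
{
	struct foo_priv *priv;
	struct dsa_switch *ds;

	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	ds = devm_kzalloc(&pdev->dev, sizeof(*ds), GFP_KERNEL);
	if (!priv || !ds)
		return -ENOMEM;

	/* the members that must be valid before registration */
	ds->dev = &pdev->dev;
	ds->num_ports = 8;		/* hypothetical port count */
	ds->ops = &foo_switch_ops;
	ds->priv = priv;
	priv->ds = ds;

	platform_set_drvdata(pdev, ds);

	return dsa_register_switch(ds);
}

static int foo_remove(struct platform_device *pdev)
{
	struct dsa_switch *ds = platform_get_drvdata(pdev);

	if (!ds)	/* shutdown() already ran, nothing left to do */
		return 0;

	dsa_unregister_switch(ds);
	platform_set_drvdata(pdev, NULL);
	return 0;
}

static void foo_shutdown(struct platform_device *pdev)
{
	struct dsa_switch *ds = platform_get_drvdata(pdev);

	if (!ds)	/* remove() already ran, nothing left to do */
		return;

	dsa_switch_shutdown(ds);
	platform_set_drvdata(pdev, NULL);
}
```

In a multi-switch tree the same pattern applies to every switch; only the last ``dsa_register_switch()`` call completes the tree and runs ``ds->ops->setup()`` for all members, as the hunk explains.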
@@ -535,7 +617,17 @@ Switch configuration
   fully configured and ready to serve any kind of request. It is recommended
   to issue a software reset of the switch during this setup function in order to
   avoid relying on what a previous software agent such as a bootloader/firmware
-  may have previously configured.
+  may have previously configured. The method responsible for undoing any
+  applicable allocations or operations done here is ``teardown``.
+
+- ``port_setup`` and ``port_teardown``: methods for initialization and
+  destruction of per-port data structures. It is mandatory for some operations
+  such as registering and unregistering devlink port regions to be done from
+  these methods, otherwise they are optional. A port will be torn down only if
+  it has been previously set up. It is possible for a port to be set up during
+  probing only to be torn down immediately afterwards, for example in case its
+  PHY cannot be found. In this case, probing of the DSA switch continues
+  without that particular port.
 
 PHY devices and link management
 -------------------------------
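The ``port_setup``/``port_teardown`` pair added above can be pictured with a small sketch that continues the hypothetical ``foo`` driver from the previous example. The two operation signatures are real; the ``ports`` array in ``struct foo_priv`` and the use of ``<linux/slab.h>`` allocations are assumptions.

```c
struct foo_port {
	int index;
	/* per-port counters, devlink regions, ... */
};

static int foo_port_setup(struct dsa_switch *ds, int port)
{
	struct foo_priv *priv = ds->priv;
	struct foo_port *p;

	p = kzalloc(sizeof(*p), GFP_KERNEL);
	if (!p)
		return -ENOMEM;

	p->index = port;
	priv->ports[port] = p;	/* hypothetical array of ds->num_ports entries */
	return 0;
}

static void foo_port_teardown(struct dsa_switch *ds, int port)
{
	struct foo_priv *priv = ds->priv;

	/* only called for ports that were successfully set up */
	kfree(priv->ports[port]);
	priv->ports[port] = NULL;
}
```

As the new paragraph notes, a port may be set up and then torn down immediately (for example when its PHY cannot be found) while probing of the switch continues without it.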
@@ -635,26 +727,198 @@ Power management
   ``BR_STATE_DISABLED`` and propagating changes to the hardware if this port is
   disabled while being a bridge member
 
+Address databases
+-----------------
+
+Switching hardware is expected to have a table for FDB entries, however not all
+of them are active at the same time. An address database is the subset (partition)
+of FDB entries that is active (can be matched by address learning on RX, or FDB
+lookup on TX) depending on the state of the port. An address database may
+occasionally be called "FID" (Filtering ID) in this document, although the
+underlying implementation may choose whatever is available to the hardware.
+
+For example, all ports that belong to a VLAN-unaware bridge (which is
+*currently* VLAN-unaware) are expected to learn source addresses in the
+database associated by the driver with that bridge (and not with other
+VLAN-unaware bridges). During forwarding and FDB lookup, a packet received on a
+VLAN-unaware bridge port should be able to find a VLAN-unaware FDB entry having
+the same MAC DA as the packet, which is present on another port member of the
+same bridge. At the same time, the FDB lookup process must be able to not find
+an FDB entry having the same MAC DA as the packet, if that entry points towards
+a port which is a member of a different VLAN-unaware bridge (and is therefore
+associated with a different address database).
+
+Similarly, each VLAN of each offloaded VLAN-aware bridge should have an
+associated address database, which is shared by all ports which are members of
+that VLAN, but not shared by ports belonging to different bridges that are
+members of the same VID.
+
+In this context, a VLAN-unaware database means that all packets are expected to
+match on it irrespective of VLAN ID (only MAC address lookup), whereas a
+VLAN-aware database means that packets are supposed to match based on the VLAN
+ID from the classified 802.1Q header (or the pvid if untagged).
+
+At the bridge layer, VLAN-unaware FDB entries have the special VID value of 0,
+whereas VLAN-aware FDB entries have non-zero VID values. Note that a
+VLAN-unaware bridge may have VLAN-aware (non-zero VID) FDB entries, and a
+VLAN-aware bridge may have VLAN-unaware FDB entries. As in hardware, the
+software bridge keeps separate address databases, and offloads to hardware the
+FDB entries belonging to these databases, through switchdev, asynchronously
+relative to the moment when the databases become active or inactive.
+
+When a user port operates in standalone mode, its driver should configure it to
+use a separate database called a port private database. This is different from
+the databases described above, and should impede operation as standalone port
+(packet in, packet out to the CPU port) as little as possible. For example,
+on ingress, it should not attempt to learn the MAC SA of ingress traffic, since
+learning is a bridging layer service and this is a standalone port, therefore
+it would consume useless space. With no address learning, the port private
+database should be empty in a naive implementation, and in this case, all
+received packets should be trivially flooded to the CPU port.
+
+DSA (cascade) and CPU ports are also called "shared" ports because they service
+multiple address databases, and the database that a packet should be associated
+to is usually embedded in the DSA tag. This means that the CPU port may
+simultaneously transport packets coming from a standalone port (which were
+classified by hardware in one address database), and from a bridge port (which
+were classified to a different address database).
+
+Switch drivers which satisfy certain criteria are able to optimize the naive
+configuration by removing the CPU port from the flooding domain of the switch,
+and just program the hardware with FDB entries pointing towards the CPU port
+for which it is known that software is interested in those MAC addresses.
+Packets which do not match a known FDB entry will not be delivered to the CPU,
+which will save CPU cycles required for creating an skb just to drop it.
+
+DSA is able to perform host address filtering for the following kinds of
+addresses:
+
+- Primary unicast MAC addresses of ports (``dev->dev_addr``). These are
+  associated with the port private database of the respective user port,
+  and the driver is notified to install them through ``port_fdb_add`` towards
+  the CPU port.
+
+- Secondary unicast and multicast MAC addresses of ports (addresses added
+  through ``dev_uc_add()`` and ``dev_mc_add()``). These are also associated
+  with the port private database of the respective user port.
+
+- Local/permanent bridge FDB entries (``BR_FDB_LOCAL``). These are the MAC
+  addresses of the bridge ports, for which packets must be terminated locally
+  and not forwarded. They are associated with the address database for that
+  bridge.
+
+- Static bridge FDB entries installed towards foreign (non-DSA) interfaces
+  present in the same bridge as some DSA switch ports. These are also
+  associated with the address database for that bridge.
+
+- Dynamically learned FDB entries on foreign interfaces present in the same
+  bridge as some DSA switch ports, only if ``ds->assisted_learning_on_cpu_port``
+  is set to true by the driver. These are associated with the address database
+  for that bridge.
+
+For various operations detailed below, DSA provides a ``dsa_db`` structure
+which can be of the following types:
+
+- ``DSA_DB_PORT``: the FDB (or MDB) entry to be installed or deleted belongs to
+  the port private database of user port ``db->dp``.
+
+- ``DSA_DB_BRIDGE``: the entry belongs to one of the address databases of bridge
+  ``db->bridge``. Separation between the VLAN-unaware database and the per-VID
+  databases of this bridge is expected to be done by the driver.
+
+- ``DSA_DB_LAG``: the entry belongs to the address database of LAG ``db->lag``.
+  Note: ``DSA_DB_LAG`` is currently unused and may be removed in the future.
+
+The drivers which act upon the ``dsa_db`` argument in ``port_fdb_add``,
+``port_mdb_add`` etc should declare ``ds->fdb_isolation`` as true.
+
+DSA associates each offloaded bridge and each offloaded LAG with a one-based ID
+(``struct dsa_bridge :: num``, ``struct dsa_lag :: id``) for the purposes of
+refcounting addresses on shared ports. Drivers may piggyback on DSA's numbering
+scheme (the ID is readable through ``db->bridge.num`` and ``db->lag.id``) or may
+implement their own.
+
+Only the drivers which declare support for FDB isolation are notified of FDB
+entries on the CPU port belonging to ``DSA_DB_PORT`` databases.
+For compatibility/legacy reasons, ``DSA_DB_BRIDGE`` addresses are notified to
+drivers even if they do not support FDB isolation. However, ``db->bridge.num``
+and ``db->lag.id`` are always set to 0 in that case (to denote the lack of
+isolation, for refcounting purposes).
+
+Note that it is not mandatory for a switch driver to implement physically
+separate address databases for each standalone user port. Since FDB entries in
+the port private databases will always point to the CPU port, there is no risk
+for incorrect forwarding decisions. In this case, all standalone ports may
+share the same database, but the reference counting of host-filtered addresses
+(not deleting the FDB entry for a port's MAC address if it's still in use by
+another port) becomes the responsibility of the driver, because DSA is unaware
+that the port databases are in fact shared. This can be achieved by calling
+``dsa_fdb_present_in_other_db()`` and ``dsa_mdb_present_in_other_db()``.
+The down side is that the RX filtering lists of each user port are in fact
+shared, which means that user port A may accept a packet with a MAC DA it
+shouldn't have, only because that MAC address was in the RX filtering list of
+user port B. These packets will still be dropped in software, however.
+
 Bridge layer
 ------------
 
+Offloading the bridge forwarding plane is optional and handled by the methods
+below. They may be absent, return -EOPNOTSUPP, or ``ds->max_num_bridges`` may
+be non-zero and exceeded, and in this case, joining a bridge port is still
+possible, but the packet forwarding will take place in software, and the ports
+under a software bridge must remain configured in the same way as for
+standalone operation, i.e. have all bridging service functions (address
+learning etc) disabled, and send all received packets to the CPU port only.
+
+Concretely, a port starts offloading the forwarding plane of a bridge once it
+returns success to the ``port_bridge_join`` method, and stops doing so after
+``port_bridge_leave`` has been called. Offloading the bridge means autonomously
+learning FDB entries in accordance with the software bridge port's state, and
+autonomously forwarding (or flooding) received packets without CPU intervention.
+This is optional even when offloading a bridge port. Tagging protocol drivers
+are expected to call ``dsa_default_offload_fwd_mark(skb)`` for packets which
+have already been autonomously forwarded in the forwarding domain of the
+ingress switch port. DSA, through ``dsa_port_devlink_setup()``, considers all
+switch ports part of the same tree ID to be part of the same bridge forwarding
+domain (capable of autonomous forwarding to each other).
+
+Offloading the TX forwarding process of a bridge is a distinct concept from
+simply offloading its forwarding plane, and refers to the ability of certain
+driver and tag protocol combinations to transmit a single skb coming from the
+bridge device's transmit function to potentially multiple egress ports (and
+thereby avoid its cloning in software).
+
+Packets for which the bridge requests this behavior are called data plane
+packets and have ``skb->offload_fwd_mark`` set to true in the tag protocol
+driver's ``xmit`` function. Data plane packets are subject to FDB lookup,
+hardware learning on the CPU port, and do not override the port STP state.
+Additionally, replication of data plane packets (multicast, flooding) is
+handled in hardware and the bridge driver will transmit a single skb for each
+packet that may or may not need replication.
+
+When the TX forwarding offload is enabled, the tag protocol driver is
+responsible to inject packets into the data plane of the hardware towards the
+correct bridging domain (FID) that the port is a part of. The port may be
+VLAN-unaware, and in this case the FID must be equal to the FID used by the
+driver for its VLAN-unaware address database associated with that bridge.
+Alternatively, the bridge may be VLAN-aware, and in that case, it is guaranteed
+that the packet is also VLAN-tagged with the VLAN ID that the bridge processed
+this packet in. It is the responsibility of the hardware to untag the VID on
+the egress-untagged ports, or keep the tag on the egress-tagged ones.
+
 - ``port_bridge_join``: bridge layer function invoked when a given switch port is
   added to a bridge, this function should do what's necessary at the switch
   level to permit the joining port to be added to the relevant logical
   domain for it to ingress/egress traffic with other members of the bridge.
+  By setting the ``tx_fwd_offload`` argument to true, the TX forwarding process
+  of this bridge is also offloaded.
 
 - ``port_bridge_leave``: bridge layer function invoked when a given switch port is
   removed from a bridge, this function should do what's necessary at the
   switch level to deny the leaving port from ingress/egress traffic from the
-  remaining bridge members. When the port leaves the bridge, it should be aged
-  out at the switch hardware for the switch to (re) learn MAC addresses behind
-  this port.
+  remaining bridge members.
 
 - ``port_stp_state_set``: bridge layer function invoked when a given switch port STP
   state is computed by the bridge layer and should be propagated to switch
-  hardware to forward/block/learn traffic. The switch driver is responsible for
-  computing a STP state change based on current and asked parameters and perform
-  the relevant ageing based on the intersection results
+  hardware to forward/block/learn traffic.
 
 - ``port_bridge_flags``: bridge layer function invoked when a port must
   configure its settings for e.g. flooding of unknown traffic or source address
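To make the ``dsa_db`` argument from the Address databases hunk above more concrete, here is a sketch of how a driver with FDB isolation might map it onto a hardware FID in ``port_fdb_add``. The prototype and the ``DSA_DB_*`` / ``db.bridge.num`` fields come from ``include/net/dsa.h`` at this point in time; the FID layout and ``foo_fdb_write()`` are hypothetical.

```c
/* Hypothetical hardware accessor: program one FDB entry in the given FID. */
static int foo_fdb_write(struct dsa_switch *ds, u16 fid,
			 const unsigned char *addr, u16 vid, int port, bool add)
{
	return 0;	/* register writes would go here */
}

/* Hypothetical FID layout: one FID per user port for the port private
 * databases, followed by one FID per bridge, keyed by DSA's one-based
 * bridge number.
 */
static u16 foo_dsa_db_to_fid(struct dsa_switch *ds, struct dsa_db db)
{
	switch (db.type) {
	case DSA_DB_PORT:
		return db.dp->index;
	case DSA_DB_BRIDGE:
		/* VLAN-unaware database of this bridge; per-VID separation
		 * would be layered on top by the driver.
		 */
		return ds->num_ports + db.bridge.num;
	case DSA_DB_LAG:
	default:
		return 0;	/* DSA_DB_LAG is currently unused */
	}
}

static int foo_port_fdb_add(struct dsa_switch *ds, int port,
			    const unsigned char *addr, u16 vid,
			    struct dsa_db db)
{
	return foo_fdb_write(ds, foo_dsa_db_to_fid(ds, db), addr, vid, port, true);
}
```

A driver that distinguishes databases like this should set ``ds->fdb_isolation = true`` so that it is given meaningful, non-zero bridge numbers, as noted above.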
@@ -667,21 +931,11 @@ Bridge layer
   CPU port, and flooding towards the CPU port should also be enabled, due to a
   lack of an explicit address filtering mechanism in the DSA core.
 
-- ``port_bridge_tx_fwd_offload``: bridge layer function invoked after
-  ``port_bridge_join`` when a driver sets ``ds->num_fwd_offloading_bridges`` to
-  a non-zero value. Returning success in this function activates the TX
-  forwarding offload bridge feature for this port, which enables the tagging
-  protocol driver to inject data plane packets towards the bridging domain that
-  the port is a part of. Data plane packets are subject to FDB lookup, hardware
-  learning on the CPU port, and do not override the port STP state.
-  Additionally, replication of data plane packets (multicast, flooding) is
-  handled in hardware and the bridge driver will transmit a single skb for each
-  packet that needs replication. The method is provided as a configuration
-  point for drivers that need to configure the hardware for enabling this
-  feature.
-
-- ``port_bridge_tx_fwd_unoffload``: bridge layer function invoked when a driver
-  leaves a bridge port which had the TX forwarding offload feature enabled.
+- ``port_fast_age``: bridge layer function invoked when flushing the
+  dynamically learned FDB entries on the port is necessary. This is called when
+  transitioning from an STP state where learning should take place to an STP
+  state where it shouldn't, or when leaving a bridge, or when address learning
+  is turned off via ``port_bridge_flags``.
 
 Bridge VLAN filtering
 ---------------------
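For the forwarding-plane and TX forwarding offload discussion in the hunks above, a sketch of a ``port_bridge_join``/``port_bridge_leave`` pair follows. The prototypes match the ones in ``include/net/dsa.h`` around this merge (they have changed across kernel releases), and ``foo_set_forwarding_domain()`` is a hypothetical hardware helper.

```c
/* Hypothetical helper: place @port into forwarding domain @dom (0 = standalone). */
static int foo_set_forwarding_domain(struct dsa_switch *ds, int port, unsigned int dom)
{
	return 0;
}

static int foo_port_bridge_join(struct dsa_switch *ds, int port,
				struct dsa_bridge bridge,
				bool *tx_fwd_offload,
				struct netlink_ext_ack *extack)
{
	int err;

	err = foo_set_forwarding_domain(ds, port, bridge.num);
	if (err) {
		NL_SET_ERR_MSG_MOD(extack, "cannot join bridge in hardware");
		return err;
	}

	/* Opt in to TX forwarding offload: the tagger may now inject data
	 * plane packets directly into this bridge's domain (FID).
	 */
	*tx_fwd_offload = true;
	return 0;
}

static void foo_port_bridge_leave(struct dsa_switch *ds, int port,
				  struct dsa_bridge bridge)
{
	/* back to standalone operation, port private database */
	foo_set_forwarding_domain(ds, port, 0);
}
```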
@@ -697,55 +951,44 @@ Bridge VLAN filtering
   allowed.
 
 - ``port_vlan_add``: bridge layer function invoked when a VLAN is configured
-  (tagged or untagged) for the given switch port. If the operation is not
-  supported by the hardware, this function should return ``-EOPNOTSUPP`` to
-  inform the bridge code to fallback to a software implementation.
+  (tagged or untagged) for the given switch port. The CPU port becomes a member
+  of a VLAN only if a foreign bridge port is also a member of it (and
+  forwarding needs to take place in software), or the VLAN is installed to the
+  VLAN group of the bridge device itself, for termination purposes
+  (``bridge vlan add dev br0 vid 100 self``). VLANs on shared ports are
+  reference counted and removed when there is no user left. Drivers do not need
+  to manually install a VLAN on the CPU port.
 
 - ``port_vlan_del``: bridge layer function invoked when a VLAN is removed from the
   given switch port
 
-- ``port_vlan_dump``: bridge layer function invoked with a switchdev callback
-  function that the driver has to call for each VLAN the given port is a member
-  of. A switchdev object is used to carry the VID and bridge flags.
-
 - ``port_fdb_add``: bridge layer function invoked when the bridge wants to install a
   Forwarding Database entry, the switch hardware should be programmed with the
   specified address in the specified VLAN Id in the forwarding database
-  associated with this VLAN ID. If the operation is not supported, this
-  function should return ``-EOPNOTSUPP`` to inform the bridge code to fallback to
-  a software implementation.
+  associated with this VLAN ID.
 
-.. note:: VLAN ID 0 corresponds to the port private database, which, in the context
-          of DSA, would be its port-based VLAN, used by the associated bridge device.
-
 - ``port_fdb_del``: bridge layer function invoked when the bridge wants to remove a
   Forwarding Database entry, the switch hardware should be programmed to delete
   the specified MAC address from the specified VLAN ID if it was mapped into
   this port forwarding database
 
-- ``port_fdb_dump``: bridge layer function invoked with a switchdev callback
-  function that the driver has to call for each MAC address known to be behind
-  the given port. A switchdev object is used to carry the VID and FDB info.
+- ``port_fdb_dump``: bridge bypass function invoked by ``ndo_fdb_dump`` on the
+  physical DSA port interfaces. Since DSA does not attempt to keep in sync its
+  hardware FDB entries with the software bridge, this method is implemented as
+  a means to view the entries visible on user ports in the hardware database.
+  The entries reported by this function have the ``self`` flag in the output of
+  the ``bridge fdb show`` command.
 
 - ``port_mdb_add``: bridge layer function invoked when the bridge wants to install
-  a multicast database entry. If the operation is not supported, this function
-  should return ``-EOPNOTSUPP`` to inform the bridge code to fallback to a
-  software implementation. The switch hardware should be programmed with the
+  a multicast database entry. The switch hardware should be programmed with the
   specified address in the specified VLAN ID in the forwarding database
   associated with this VLAN ID.
 
-.. note:: VLAN ID 0 corresponds to the port private database, which, in the context
-          of DSA, would be its port-based VLAN, used by the associated bridge device.
-
 - ``port_mdb_del``: bridge layer function invoked when the bridge wants to remove a
   multicast database entry, the switch hardware should be programmed to delete
   the specified MAC address from the specified VLAN ID if it was mapped into
   this port forwarding database.
 
-- ``port_mdb_dump``: bridge layer function invoked with a switchdev callback
-  function that the driver has to call for each MAC address known to be behind
-  the given port. A switchdev object is used to carry the VID and MDB info.
-
 Link aggregation
 ----------------
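Finally, the reworded ``port_fdb_dump`` entry above (a bridge bypass operation behind ``ndo_fdb_dump``) can be sketched as follows; ``dsa_fdb_dump_cb_t`` is the callback type from ``include/net/dsa.h``, while ``struct foo_fdb_entry`` and ``foo_fdb_read_next()`` stand in for the hardware table walk.

```c
#include <linux/if_ether.h>
#include <net/dsa.h>

struct foo_fdb_entry {
	unsigned char addr[ETH_ALEN];
	u16 vid;
	int port;
	bool is_static;
};

/* Hypothetical iterator over the hardware FDB; returns -ENODATA when done. */
static int foo_fdb_read_next(struct dsa_switch *ds, struct foo_fdb_entry *e)
{
	return -ENODATA;
}

static int foo_port_fdb_dump(struct dsa_switch *ds, int port,
			     dsa_fdb_dump_cb_t *cb, void *data)
{
	struct foo_fdb_entry e = {};
	int err;

	while (!foo_fdb_read_next(ds, &e)) {
		if (e.port != port)
			continue;

		/* reported with the "self" flag by "bridge fdb show" */
		err = cb(e.addr, e.vid, e.is_static, data);
		if (err)
			return err;
	}

	return 0;
}
```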


@@ -1052,11 +1052,7 @@ udp_rmem_min - INTEGER
         Default: 4K
 
 udp_wmem_min - INTEGER
-        Minimal size of send buffer used by UDP sockets in moderation.
-        Each UDP socket is able to use the size for sending data, even if
-        total pages of UDP sockets exceed udp_mem pressure. The unit is byte.
-
-        Default: 4K
+        UDP does not have tx memory accounting and this tunable has no effect.
 
 RAW variables
 =============


@@ -5658,7 +5658,7 @@ by a string of size ``name_size``.
   #define KVM_STATS_UNIT_SECONDS (0x2 << KVM_STATS_UNIT_SHIFT)
   #define KVM_STATS_UNIT_CYCLES (0x3 << KVM_STATS_UNIT_SHIFT)
   #define KVM_STATS_UNIT_BOOLEAN (0x4 << KVM_STATS_UNIT_SHIFT)
-  #define KVM_STATS_UNIT_MAX KVM_STATS_UNIT_CYCLES
+  #define KVM_STATS_UNIT_MAX KVM_STATS_UNIT_BOOLEAN
 
   #define KVM_STATS_BASE_SHIFT 8
   #define KVM_STATS_BASE_MASK (0xF << KVM_STATS_BASE_SHIFT)


@@ -15849,7 +15849,7 @@ PIN CONTROLLER - FREESCALE
 M: Dong Aisheng <aisheng.dong@nxp.com>
 M: Fabio Estevam <festevam@gmail.com>
 M: Shawn Guo <shawnguo@kernel.org>
-M: Stefan Agner <stefan@agner.ch>
+M: Jacky Bai <ping.bai@nxp.com>
 R: Pengutronix Kernel Team <kernel@pengutronix.de>
 L: linux-gpio@vger.kernel.org
 S: Maintained


@@ -2,7 +2,7 @@
 VERSION = 5
 PATCHLEVEL = 19
 SUBLEVEL = 0
-EXTRAVERSION = -rc7
+EXTRAVERSION = -rc8
 NAME = Superb Owl
 
 # *DOCUMENTATION*


@@ -438,6 +438,13 @@ config MMU_GATHER_PAGE_SIZE
 
 config MMU_GATHER_NO_RANGE
         bool
+        select MMU_GATHER_MERGE_VMAS
+
+config MMU_GATHER_NO_FLUSH_CACHE
+        bool
+
+config MMU_GATHER_MERGE_VMAS
+        bool
 
 config MMU_GATHER_NO_GATHER
         bool


@@ -4,21 +4,6 @@
 #define __ASM_CSKY_TLB_H
 
 #include <asm/cacheflush.h>
 
-#define tlb_start_vma(tlb, vma) \
-        do { \
-                if (!(tlb)->fullmm) \
-                        flush_cache_range(vma, (vma)->vm_start, (vma)->vm_end); \
-        } while (0)
-
-#define tlb_end_vma(tlb, vma) \
-        do { \
-                if (!(tlb)->fullmm) \
-                        flush_tlb_range(vma, (vma)->vm_start, (vma)->vm_end); \
-        } while (0)
-
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
-
 #include <asm-generic/tlb.h>
 
 #endif /* __ASM_CSKY_TLB_H */


@@ -108,6 +108,7 @@ config LOONGARCH
         select TRACE_IRQFLAGS_SUPPORT
         select USE_PERCPU_NUMA_NODE_ID
         select ZONE_DMA32
+        select MMU_GATHER_MERGE_VMAS if MMU
 
 config 32BIT
         bool


@@ -137,16 +137,6 @@ static inline void invtlb_all(u32 op, u32 info, u64 addr)
         );
 }
 
-/*
- * LoongArch doesn't need any special per-pte or per-vma handling, except
- * we need to flush cache for area to be unmapped.
- */
-#define tlb_start_vma(tlb, vma) \
-        do { \
-                if (!(tlb)->fullmm) \
-                        flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-        } while (0)
-#define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
 
 static void tlb_flush(struct mmu_gather *tlb);


@@ -256,6 +256,7 @@ config PPC
         select IRQ_FORCED_THREADING
         select MMU_GATHER_PAGE_SIZE
         select MMU_GATHER_RCU_TABLE_FREE
+        select MMU_GATHER_MERGE_VMAS
         select MODULES_USE_ELF_RELA
         select NEED_DMA_MAP_STATE if PPC64 || NOT_COHERENT_CACHE
         select NEED_PER_CPU_EMBED_FIRST_CHUNK if PPC64


@@ -19,8 +19,6 @@
 #include <linux/pagemap.h>
 
-#define tlb_start_vma(tlb, vma) do { } while (0)
-#define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry __tlb_remove_tlb_entry
 #define tlb_flush tlb_flush


@@ -73,6 +73,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
 endif
 
 KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax)
+KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax)
 
 # GCC versions that support the "-mstrict-align" option default to allowing
 # unaligned accesses. While unaligned accesses are explicitly allowed in the


@@ -35,7 +35,7 @@
         gpio-keys {
                 compatible = "gpio-keys";
 
-                key0 {
+                key {
                         label = "KEY0";
                         linux,code = <BTN_0>;
                         gpios = <&gpio0 10 GPIO_ACTIVE_LOW>;


@@ -47,7 +47,7 @@
         gpio-keys {
                 compatible = "gpio-keys";
 
-                boot {
+                key-boot {
                         label = "BOOT";
                         linux,code = <BTN_0>;
                         gpios = <&gpio0 0 GPIO_ACTIVE_LOW>;


@@ -52,7 +52,7 @@
         gpio-keys {
                 compatible = "gpio-keys";
 
-                boot {
+                key-boot {
                         label = "BOOT";
                         linux,code = <BTN_0>;
                         gpios = <&gpio0 0 GPIO_ACTIVE_LOW>;


@@ -46,19 +46,19 @@
         gpio-keys {
                 compatible = "gpio-keys";
 
-                up {
+                key-up {
                         label = "UP";
                         linux,code = <BTN_1>;
                         gpios = <&gpio1_0 7 GPIO_ACTIVE_LOW>;
                 };
 
-                press {
+                key-press {
                         label = "PRESS";
                         linux,code = <BTN_0>;
                         gpios = <&gpio0 0 GPIO_ACTIVE_LOW>;
                 };
 
-                down {
+                key-down {
                         label = "DOWN";
                         linux,code = <BTN_2>;
                         gpios = <&gpio0 1 GPIO_ACTIVE_LOW>;


@@ -23,7 +23,7 @@
         gpio-keys {
                 compatible = "gpio-keys";
 
-                boot {
+                key-boot {
                         label = "BOOT";
                         linux,code = <BTN_0>;
                         gpios = <&gpio0 0 GPIO_ACTIVE_LOW>;


@@ -78,7 +78,7 @@ obj-$(CONFIG_SMP) += cpu_ops_sbi.o
 endif
 
 obj-$(CONFIG_HOTPLUG_CPU) += cpu-hotplug.o
 obj-$(CONFIG_KGDB) += kgdb.o
-obj-$(CONFIG_KEXEC) += kexec_relocate.o crash_save_regs.o machine_kexec.o
+obj-$(CONFIG_KEXEC_CORE) += kexec_relocate.o crash_save_regs.o machine_kexec.o
 obj-$(CONFIG_KEXEC_FILE) += elf_kexec.o machine_kexec_file.o
 obj-$(CONFIG_CRASH_DUMP) += crash_dump.o


@@ -349,7 +349,7 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
 {
         const char *strtab, *name, *shstrtab;
         const Elf_Shdr *sechdrs;
-        Elf_Rela *relas;
+        Elf64_Rela *relas;
         int i, r_type;
 
         /* String & section header string table */


@@ -204,6 +204,7 @@ config S390
         select IOMMU_SUPPORT if PCI
         select MMU_GATHER_NO_GATHER
         select MMU_GATHER_RCU_TABLE_FREE
+        select MMU_GATHER_MERGE_VMAS
         select MODULES_USE_ELF_RELA
         select NEED_DMA_MAP_STATE if PCI
         select NEED_SG_DMA_LENGTH if PCI


@@ -2,7 +2,7 @@
 /*
  * Kernel interface for the s390 arch_random_* functions
  *
- * Copyright IBM Corp. 2017, 2020
+ * Copyright IBM Corp. 2017, 2022
  *
  * Author: Harald Freudenberger <freude@de.ibm.com>
  *
@@ -14,6 +14,7 @@
 #ifdef CONFIG_ARCH_RANDOM
 
 #include <linux/static_key.h>
+#include <linux/preempt.h>
 #include <linux/atomic.h>
 #include <asm/cpacf.h>
 
@@ -32,7 +33,8 @@ static inline bool __must_check arch_get_random_int(unsigned int *v)
 
 static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
 {
-        if (static_branch_likely(&s390_arch_random_available)) {
+        if (static_branch_likely(&s390_arch_random_available) &&
+            in_task()) {
                 cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
                 atomic64_add(sizeof(*v), &s390_arch_random_counter);
                 return true;
@@ -42,7 +44,8 @@ static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
 
 static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
 {
-        if (static_branch_likely(&s390_arch_random_available)) {
+        if (static_branch_likely(&s390_arch_random_available) &&
+            in_task()) {
                 cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
                 atomic64_add(sizeof(*v), &s390_arch_random_counter);
                 return true;


@@ -27,9 +27,6 @@ static inline void tlb_flush(struct mmu_gather *tlb);
 static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
                 struct page *page, int page_size);
 
-#define tlb_start_vma(tlb, vma) do { } while (0)
-#define tlb_end_vma(tlb, vma) do { } while (0)
-
 #define tlb_flush tlb_flush
 #define pte_free_tlb pte_free_tlb
 #define pmd_free_tlb pmd_free_tlb


@@ -67,6 +67,8 @@ config SPARC64
         select HAVE_KRETPROBES
         select HAVE_KPROBES
         select MMU_GATHER_RCU_TABLE_FREE if SMP
+        select MMU_GATHER_MERGE_VMAS
+        select MMU_GATHER_NO_FLUSH_CACHE
         select HAVE_ARCH_TRANSPARENT_HUGEPAGE
         select HAVE_DYNAMIC_FTRACE
         select HAVE_FTRACE_MCOUNT_RECORD


@@ -22,8 +22,6 @@ void smp_flush_tlb_mm(struct mm_struct *mm);
 void __flush_tlb_pending(unsigned long, unsigned long, unsigned long *);
 void flush_tlb_pending(void);
 
-#define tlb_start_vma(tlb, vma) do { } while (0)
-#define tlb_end_vma(tlb, vma) do { } while (0)
 #define tlb_flush(tlb) flush_tlb_pending()
 
 /*


@@ -245,6 +245,7 @@ config X86
         select HAVE_PERF_REGS
         select HAVE_PERF_USER_STACK_DUMP
         select MMU_GATHER_RCU_TABLE_FREE if PARAVIRT
+        select MMU_GATHER_MERGE_VMAS
         select HAVE_POSIX_CPU_TIMERS_TASK_WORK
         select HAVE_REGS_AND_STACK_ACCESS_API
         select HAVE_RELIABLE_STACKTRACE if UNWINDER_ORC || STACK_VALIDATION
@@ -2473,7 +2474,7 @@ config RETHUNK
         bool "Enable return-thunks"
         depends on RETPOLINE && CC_HAS_RETURN_THUNK
         select OBJTOOL if HAVE_OBJTOOL
-        default y
+        default y if X86_64
         help
           Compile the kernel with the return-thunks compiler option to guard
           against kernel-to-user data leaks by avoiding return speculation.
@@ -2482,21 +2483,21 @@ config RETHUNK
 
 config CPU_UNRET_ENTRY
         bool "Enable UNRET on kernel entry"
-        depends on CPU_SUP_AMD && RETHUNK
+        depends on CPU_SUP_AMD && RETHUNK && X86_64
         default y
         help
           Compile the kernel with support for the retbleed=unret mitigation.
 
 config CPU_IBPB_ENTRY
         bool "Enable IBPB on kernel entry"
-        depends on CPU_SUP_AMD
+        depends on CPU_SUP_AMD && X86_64
         default y
         help
           Compile the kernel with support for the retbleed=ibpb mitigation.
 
 config CPU_IBRS_ENTRY
         bool "Enable IBRS on kernel entry"
-        depends on CPU_SUP_INTEL
+        depends on CPU_SUP_INTEL && X86_64
         default y
         help
           Compile the kernel with support for the spectre_v2=ibrs mitigation.


@@ -27,6 +27,7 @@ RETHUNK_CFLAGS := -mfunction-return=thunk-extern
 RETPOLINE_CFLAGS += $(RETHUNK_CFLAGS)
 endif
 
+export RETHUNK_CFLAGS
 export RETPOLINE_CFLAGS
 export RETPOLINE_VDSO_CFLAGS


@@ -278,9 +278,9 @@ enum {
 };
 
 /*
- * For formats with LBR_TSX flags (e.g. LBR_FORMAT_EIP_FLAGS2), bits 61:62 in
- * MSR_LAST_BRANCH_FROM_x are the TSX flags when TSX is supported, but when
- * TSX is not supported they have no consistent behavior:
+ * For format LBR_FORMAT_EIP_FLAGS2, bits 61:62 in MSR_LAST_BRANCH_FROM_x
+ * are the TSX flags when TSX is supported, but when TSX is not supported
+ * they have no consistent behavior:
  *
  * - For wrmsr(), bits 61:62 are considered part of the sign extension.
  * - For HW updates (branch captures) bits 61:62 are always OFF and are not
@@ -288,7 +288,7 @@ enum {
  *
  * Therefore, if:
  *
- * 1) LBR has TSX format
+ * 1) LBR format LBR_FORMAT_EIP_FLAGS2
  * 2) CPU has no TSX support enabled
  *
  * ... then any value passed to wrmsr() must be sign extended to 63 bits and any
@@ -300,7 +300,7 @@ static inline bool lbr_from_signext_quirk_needed(void)
         bool tsx_support = boot_cpu_has(X86_FEATURE_HLE) ||
                            boot_cpu_has(X86_FEATURE_RTM);
 
-        return !tsx_support && x86_pmu.lbr_has_tsx;
+        return !tsx_support;
 }
 
 static DEFINE_STATIC_KEY_FALSE(lbr_from_quirk_key);
@@ -1609,9 +1609,6 @@ void intel_pmu_lbr_init_hsw(void)
         x86_pmu.lbr_sel_map = hsw_lbr_sel_map;
 
         x86_get_pmu(smp_processor_id())->task_ctx_cache = create_lbr_kmem_cache(size, 0);
-
-        if (lbr_from_signext_quirk_needed())
-                static_branch_enable(&lbr_from_quirk_key);
 }
 
 /* skylake */
@@ -1702,7 +1699,11 @@ void intel_pmu_lbr_init(void)
         switch (x86_pmu.intel_cap.lbr_format) {
         case LBR_FORMAT_EIP_FLAGS2:
                 x86_pmu.lbr_has_tsx = 1;
-                fallthrough;
+                x86_pmu.lbr_from_flags = 1;
+                if (lbr_from_signext_quirk_needed())
+                        static_branch_enable(&lbr_from_quirk_key);
+                break;
+
         case LBR_FORMAT_EIP_FLAGS:
                 x86_pmu.lbr_from_flags = 1;
                 break;


@@ -302,6 +302,7 @@
 #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */
 #define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */
 #define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */
+#define X86_FEATURE_USE_IBPB_FW (11*32+16) /* "" Use IBPB during runtime firmware calls */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
 #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */


@@ -297,6 +297,8 @@ do { \
         alternative_msr_write(MSR_IA32_SPEC_CTRL, \
                               spec_ctrl_current() | SPEC_CTRL_IBRS, \
                               X86_FEATURE_USE_IBRS_FW); \
+        alternative_msr_write(MSR_IA32_PRED_CMD, PRED_CMD_IBPB, \
+                              X86_FEATURE_USE_IBPB_FW); \
 } while (0)
 
 #define firmware_restrict_branch_speculation_end() \


@@ -2,9 +2,6 @@
 #ifndef _ASM_X86_TLB_H
 #define _ASM_X86_TLB_H
 
-#define tlb_start_vma(tlb, vma) do { } while (0)
-#define tlb_end_vma(tlb, vma) do { } while (0)
-
 #define tlb_flush tlb_flush
 static inline void tlb_flush(struct mmu_gather *tlb);


@@ -555,7 +555,9 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end)
                 dest = addr + insn.length + insn.immediate.value;
 
                 if (__static_call_fixup(addr, op, dest) ||
-                    WARN_ON_ONCE(dest != &__x86_return_thunk))
+                    WARN_ONCE(dest != &__x86_return_thunk,
+                              "missing return thunk: %pS-%pS: %*ph",
+                              addr, dest, 5, addr))
                         continue;
 
                 DPRINTK("return thunk at: %pS (%px) len: %d to: %pS",


@@ -975,6 +975,7 @@ static inline const char *spectre_v2_module_string(void) { return ""; }
 #define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n"
 #define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n"
 #define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spectre v2 BHB attacks!\n"
+#define SPECTRE_V2_IBRS_PERF_MSG "WARNING: IBRS mitigation selected on Enhanced IBRS CPU, this may cause unnecessary performance loss\n"
 
 #ifdef CONFIG_BPF_SYSCALL
 void unpriv_ebpf_notify(int new_state)
@@ -1415,6 +1416,8 @@ static void __init spectre_v2_select_mitigation(void)
 
         case SPECTRE_V2_IBRS:
                 setup_force_cpu_cap(X86_FEATURE_KERNEL_IBRS);
+                if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED))
+                        pr_warn(SPECTRE_V2_IBRS_PERF_MSG);
                 break;
 
         case SPECTRE_V2_LFENCE:
@@ -1516,7 +1519,16 @@ static void __init spectre_v2_select_mitigation(void)
          * the CPU supports Enhanced IBRS, kernel might un-intentionally not
          * enable IBRS around firmware calls.
          */
-        if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) {
+        if (boot_cpu_has_bug(X86_BUG_RETBLEED) &&
+            (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+             boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)) {
+
+                if (retbleed_cmd != RETBLEED_CMD_IBPB) {
+                        setup_force_cpu_cap(X86_FEATURE_USE_IBPB_FW);
+                        pr_info("Enabling Speculation Barrier for firmware calls\n");
+                }
+
+        } else if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) {
                 setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
                 pr_info("Enabling Restricted Speculation for firmware calls\n");
         }


@@ -6029,6 +6029,11 @@ split_irqchip_unlock:
                 r = 0;
                 break;
         case KVM_CAP_X86_USER_SPACE_MSR:
+                r = -EINVAL;
+                if (cap->args[0] & ~(KVM_MSR_EXIT_REASON_INVAL |
+                                     KVM_MSR_EXIT_REASON_UNKNOWN |
+                                     KVM_MSR_EXIT_REASON_FILTER))
+                        break;
                 kvm->arch.user_space_msr_mask = cap->args[0];
                 r = 0;
                 break;
@@ -6183,6 +6188,9 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
         if (copy_from_user(&filter, user_msr_filter, sizeof(filter)))
                 return -EFAULT;
 
+        if (filter.flags & ~KVM_MSR_FILTER_DEFAULT_DENY)
+                return -EINVAL;
+
         for (i = 0; i < ARRAY_SIZE(filter.ranges); i++)
                 empty &= !filter.ranges[i].nmsrs;


@@ -43,6 +43,7 @@ config SYSTEM_TRUSTED_KEYRING
         bool "Provide system-wide ring of trusted keys"
         depends on KEYS
         depends on ASYMMETRIC_KEY_TYPE
+        depends on X509_CERTIFICATE_PARSER
         help
           Provide a system keyring to which trusted keys can be added. Keys in
           the keyring are considered to be trusted. Keys may be added at will


@@ -782,6 +782,7 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
 
                 if (!osc_cpc_flexible_adr_space_confirmed) {
                         pr_debug("Flexible address space capability not supported\n");
+                        if (!cpc_supported_by_cpu())
                                 goto out_free;
                 }
 
@@ -809,6 +810,7 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
                 }
                 if (!osc_cpc_flexible_adr_space_confirmed) {
                         pr_debug("Flexible address space capability not supported\n");
+                        if (!cpc_supported_by_cpu())
                                 goto out_free;
                 }
         } else {


@@ -213,7 +213,7 @@ static int lan966x_gate_clk_register(struct device *dev,
                 hw_data->hws[i] =
                         devm_clk_hw_register_gate(dev, clk_gate_desc[idx].name,
-                                                  "lan966x", 0, base,
+                                                  "lan966x", 0, gate_base,
                                                   clk_gate_desc[idx].bit_idx,
                                                   0, &clk_gate_lock);


@ -351,6 +351,9 @@ static const struct regmap_config pca953x_i2c_regmap = {
.reg_bits = 8, .reg_bits = 8,
.val_bits = 8, .val_bits = 8,
.use_single_read = true,
.use_single_write = true,
.readable_reg = pca953x_readable_register, .readable_reg = pca953x_readable_register,
.writeable_reg = pca953x_writeable_register, .writeable_reg = pca953x_writeable_register,
.volatile_reg = pca953x_volatile_register, .volatile_reg = pca953x_volatile_register,
@ -906,15 +909,18 @@ static int pca953x_irq_setup(struct pca953x_chip *chip,
static int device_pca95xx_init(struct pca953x_chip *chip, u32 invert) static int device_pca95xx_init(struct pca953x_chip *chip, u32 invert)
{ {
DECLARE_BITMAP(val, MAX_LINE); DECLARE_BITMAP(val, MAX_LINE);
u8 regaddr;
int ret; int ret;
ret = regcache_sync_region(chip->regmap, chip->regs->output, regaddr = pca953x_recalc_addr(chip, chip->regs->output, 0);
chip->regs->output + NBANK(chip)); ret = regcache_sync_region(chip->regmap, regaddr,
regaddr + NBANK(chip) - 1);
if (ret) if (ret)
goto out; goto out;
ret = regcache_sync_region(chip->regmap, chip->regs->direction, regaddr = pca953x_recalc_addr(chip, chip->regs->direction, 0);
chip->regs->direction + NBANK(chip)); ret = regcache_sync_region(chip->regmap, regaddr,
regaddr + NBANK(chip) - 1);
if (ret) if (ret)
goto out; goto out;
@ -1127,14 +1133,14 @@ static int pca953x_regcache_sync(struct device *dev)
* sync these registers first and only then sync the rest. * sync these registers first and only then sync the rest.
*/ */
regaddr = pca953x_recalc_addr(chip, chip->regs->direction, 0); regaddr = pca953x_recalc_addr(chip, chip->regs->direction, 0);
ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip)); ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip) - 1);
if (ret) { if (ret) {
dev_err(dev, "Failed to sync GPIO dir registers: %d\n", ret); dev_err(dev, "Failed to sync GPIO dir registers: %d\n", ret);
return ret; return ret;
} }
regaddr = pca953x_recalc_addr(chip, chip->regs->output, 0); regaddr = pca953x_recalc_addr(chip, chip->regs->output, 0);
ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip)); ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip) - 1);
if (ret) { if (ret) {
dev_err(dev, "Failed to sync GPIO out registers: %d\n", ret); dev_err(dev, "Failed to sync GPIO out registers: %d\n", ret);
return ret; return ret;
@ -1144,7 +1150,7 @@ static int pca953x_regcache_sync(struct device *dev)
if (chip->driver_data & PCA_PCAL) { if (chip->driver_data & PCA_PCAL) {
regaddr = pca953x_recalc_addr(chip, PCAL953X_IN_LATCH, 0); regaddr = pca953x_recalc_addr(chip, PCAL953X_IN_LATCH, 0);
ret = regcache_sync_region(chip->regmap, regaddr, ret = regcache_sync_region(chip->regmap, regaddr,
regaddr + NBANK(chip)); regaddr + NBANK(chip) - 1);
if (ret) { if (ret) {
dev_err(dev, "Failed to sync INT latch registers: %d\n", dev_err(dev, "Failed to sync INT latch registers: %d\n",
ret); ret);
@ -1153,7 +1159,7 @@ static int pca953x_regcache_sync(struct device *dev)
regaddr = pca953x_recalc_addr(chip, PCAL953X_INT_MASK, 0); regaddr = pca953x_recalc_addr(chip, PCAL953X_INT_MASK, 0);
ret = regcache_sync_region(chip->regmap, regaddr, ret = regcache_sync_region(chip->regmap, regaddr,
regaddr + NBANK(chip)); regaddr + NBANK(chip) - 1);
if (ret) { if (ret) {
dev_err(dev, "Failed to sync INT mask registers: %d\n", dev_err(dev, "Failed to sync INT mask registers: %d\n",
ret); ret);
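
For context, not part of the commit itself: regcache_sync_region() syncs an inclusive [first, last] register range, so syncing NBANK(chip) registers that start at regaddr means last = regaddr + NBANK(chip) - 1. As a worked example with NBANK(chip) = 2 and regaddr = 0x02, the intended registers are 0x02 and 0x03, i.e. last = 0x02 + 2 - 1 = 0x03; the old bound of regaddr + NBANK(chip) = 0x04 synced one register too many.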


@@ -99,7 +99,7 @@ static inline void xgpio_set_value32(unsigned long *map, int bit, u32 v)
 	const unsigned long offset = (bit % BITS_PER_LONG) & BIT(5);
 
 	map[index] &= ~(0xFFFFFFFFul << offset);
-	map[index] |= v << offset;
+	map[index] |= (unsigned long)v << offset;
 }
 
 static inline int xgpio_regoffset(struct xgpio_instance *chip, int ch)
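
For context, not part of the commit itself: shifting a 32-bit value left by 32 or more bits is undefined behaviour in C, so without the cast the upper half of the 64-bit map word cannot be written reliably. A minimal, self-contained sketch of the fixed pattern, assuming an LP64 platform where unsigned long is 64 bits:

    #include <stdio.h>

    int main(void)
    {
            unsigned int v = 0xabcd1234u;   /* 32-bit value to place */
            unsigned long map = 0;          /* 64-bit word, like the xgpio shadow map */
            unsigned int offset = 32;       /* upper half of the word */

            /* Widen before shifting: a plain (v << 32) would shift a 32-bit
             * operand by its full width, which is undefined behaviour. */
            map |= (unsigned long)v << offset;

            printf("map = 0x%016lx\n", map); /* prints map = 0xabcd123400000000 */
            return 0;
    }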


@ -1364,16 +1364,10 @@ void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
struct amdgpu_vm *vm) struct amdgpu_vm *vm)
{ {
struct amdkfd_process_info *process_info = vm->process_info; struct amdkfd_process_info *process_info = vm->process_info;
struct amdgpu_bo *pd = vm->root.bo;
if (!process_info) if (!process_info)
return; return;
/* Release eviction fence from PD */
amdgpu_bo_reserve(pd, false);
amdgpu_bo_fence(pd, NULL, false);
amdgpu_bo_unreserve(pd);
/* Update process info */ /* Update process info */
mutex_lock(&process_info->lock); mutex_lock(&process_info->lock);
process_info->n_vms--; process_info->n_vms--;


@@ -40,7 +40,7 @@ static void amdgpu_bo_list_free_rcu(struct rcu_head *rcu)
 {
 	struct amdgpu_bo_list *list = container_of(rcu, struct amdgpu_bo_list,
 					   rhead);
-
+	mutex_destroy(&list->bo_list_mutex);
 	kvfree(list);
 }
@@ -136,6 +136,7 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
 	trace_amdgpu_cs_bo_status(list->num_entries, total_size);
 
+	mutex_init(&list->bo_list_mutex);
 	*result = list;
 	return 0;


@@ -47,6 +47,10 @@ struct amdgpu_bo_list {
 	struct amdgpu_bo *oa_obj;
 	unsigned first_userptr;
 	unsigned num_entries;
+
+	/* Protect access during command submission.
+	 */
+	struct mutex bo_list_mutex;
 };
 
 int amdgpu_bo_list_get(struct amdgpu_fpriv *fpriv, int id,


@ -519,6 +519,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
return r; return r;
} }
mutex_lock(&p->bo_list->bo_list_mutex);
/* One for TTM and one for the CS job */ /* One for TTM and one for the CS job */
amdgpu_bo_list_for_each_entry(e, p->bo_list) amdgpu_bo_list_for_each_entry(e, p->bo_list)
e->tv.num_shared = 2; e->tv.num_shared = 2;
@ -651,6 +653,7 @@ out_free_user_pages:
kvfree(e->user_pages); kvfree(e->user_pages);
e->user_pages = NULL; e->user_pages = NULL;
} }
mutex_unlock(&p->bo_list->bo_list_mutex);
} }
return r; return r;
} }
@ -690,9 +693,11 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error,
{ {
unsigned i; unsigned i;
if (error && backoff) if (error && backoff) {
ttm_eu_backoff_reservation(&parser->ticket, ttm_eu_backoff_reservation(&parser->ticket,
&parser->validated); &parser->validated);
mutex_unlock(&parser->bo_list->bo_list_mutex);
}
for (i = 0; i < parser->num_post_deps; i++) { for (i = 0; i < parser->num_post_deps; i++) {
drm_syncobj_put(parser->post_deps[i].syncobj); drm_syncobj_put(parser->post_deps[i].syncobj);
@ -832,13 +837,17 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
continue; continue;
r = amdgpu_vm_bo_update(adev, bo_va, false); r = amdgpu_vm_bo_update(adev, bo_va, false);
if (r) if (r) {
mutex_unlock(&p->bo_list->bo_list_mutex);
return r; return r;
}
r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update); r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
if (r) if (r) {
mutex_unlock(&p->bo_list->bo_list_mutex);
return r; return r;
} }
}
r = amdgpu_vm_handle_moved(adev, vm); r = amdgpu_vm_handle_moved(adev, vm);
if (r) if (r)
@ -1278,6 +1287,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence); ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
mutex_unlock(&p->adev->notifier_lock); mutex_unlock(&p->adev->notifier_lock);
mutex_unlock(&p->bo_list->bo_list_mutex);
return 0; return 0;


@ -1653,7 +1653,7 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY) #if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
adev->dm.crc_rd_wrk = amdgpu_dm_crtc_secure_display_create_work(); adev->dm.crc_rd_wrk = amdgpu_dm_crtc_secure_display_create_work();
#endif #endif
if (dc_enable_dmub_notifications(adev->dm.dc)) { if (dc_is_dmub_outbox_supported(adev->dm.dc)) {
init_completion(&adev->dm.dmub_aux_transfer_done); init_completion(&adev->dm.dmub_aux_transfer_done);
adev->dm.dmub_notify = kzalloc(sizeof(struct dmub_notification), GFP_KERNEL); adev->dm.dmub_notify = kzalloc(sizeof(struct dmub_notification), GFP_KERNEL);
if (!adev->dm.dmub_notify) { if (!adev->dm.dmub_notify) {
@ -1689,6 +1689,13 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
goto error; goto error;
} }
/* Enable outbox notification only after IRQ handlers are registered and DMUB is alive.
* It is expected that DMUB will resend any pending notifications at this point, for
* example HPD from DPIA.
*/
if (dc_is_dmub_outbox_supported(adev->dm.dc))
dc_enable_dmub_outbox(adev->dm.dc);
/* create fake encoders for MST */ /* create fake encoders for MST */
dm_dp_create_fake_mst_encoders(adev); dm_dp_create_fake_mst_encoders(adev);
@ -2678,9 +2685,6 @@ static int dm_resume(void *handle)
*/ */
link_enc_cfg_copy(adev->dm.dc->current_state, dc_state); link_enc_cfg_copy(adev->dm.dc->current_state, dc_state);
if (dc_enable_dmub_notifications(adev->dm.dc))
amdgpu_dm_outbox_init(adev);
r = dm_dmub_hw_init(adev); r = dm_dmub_hw_init(adev);
if (r) if (r)
DRM_ERROR("DMUB interface failed to initialize: status=%d\n", r); DRM_ERROR("DMUB interface failed to initialize: status=%d\n", r);
@ -2698,6 +2702,11 @@ static int dm_resume(void *handle)
} }
} }
if (dc_is_dmub_outbox_supported(adev->dm.dc)) {
amdgpu_dm_outbox_init(adev);
dc_enable_dmub_outbox(adev->dm.dc);
}
WARN_ON(!dc_commit_state(dm->dc, dc_state)); WARN_ON(!dc_commit_state(dm->dc, dc_state));
dm_gpureset_commit_state(dm->cached_dc_state, dm); dm_gpureset_commit_state(dm->cached_dc_state, dm);
@ -2719,13 +2728,15 @@ static int dm_resume(void *handle)
/* TODO: Remove dc_state->dccg, use dc->dccg directly. */ /* TODO: Remove dc_state->dccg, use dc->dccg directly. */
dc_resource_state_construct(dm->dc, dm_state->context); dc_resource_state_construct(dm->dc, dm_state->context);
/* Re-enable outbox interrupts for DPIA. */
if (dc_enable_dmub_notifications(adev->dm.dc))
amdgpu_dm_outbox_init(adev);
/* Before powering on DC we need to re-initialize DMUB. */ /* Before powering on DC we need to re-initialize DMUB. */
dm_dmub_hw_resume(adev); dm_dmub_hw_resume(adev);
/* Re-enable outbox interrupts for DPIA. */
if (dc_is_dmub_outbox_supported(adev->dm.dc)) {
amdgpu_dm_outbox_init(adev);
dc_enable_dmub_outbox(adev->dm.dc);
}
/* power on hardware */ /* power on hardware */
dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0); dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0);


@@ -64,8 +64,13 @@ int drm_gem_ttm_vmap(struct drm_gem_object *gem,
 		     struct iosys_map *map)
 {
 	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
+	int ret;
 
-	return ttm_bo_vmap(bo, map);
+	dma_resv_lock(gem->resv, NULL);
+	ret = ttm_bo_vmap(bo, map);
+	dma_resv_unlock(gem->resv);
+
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_ttm_vmap);
@@ -82,7 +87,9 @@ void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
 {
 	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
 
+	dma_resv_lock(gem->resv, NULL);
 	ttm_bo_vunmap(bo, map);
+	dma_resv_unlock(gem->resv);
 }
 EXPORT_SYMBOL(drm_gem_ttm_vunmap);


@@ -273,10 +273,17 @@ struct intel_context {
 		u8 child_index;
 		/** @guc: GuC specific members for parallel submission */
 		struct {
-			/** @wqi_head: head pointer in work queue */
+			/** @wqi_head: cached head pointer in work queue */
 			u16 wqi_head;
-			/** @wqi_tail: tail pointer in work queue */
+			/** @wqi_tail: cached tail pointer in work queue */
 			u16 wqi_tail;
+			/** @wq_head: pointer to the actual head in work queue */
+			u32 *wq_head;
+			/** @wq_tail: pointer to the actual head in work queue */
+			u32 *wq_tail;
+			/** @wq_status: pointer to the status in work queue */
+			u32 *wq_status;
 
 			/**
 			 * @parent_page: page in context state (ce->state) used
 			 * by parent for work queue, process descriptor


@ -661,6 +661,16 @@ static inline void execlists_schedule_out(struct i915_request *rq)
i915_request_put(rq); i915_request_put(rq);
} }
static u32 map_i915_prio_to_lrc_desc_prio(int prio)
{
if (prio > I915_PRIORITY_NORMAL)
return GEN12_CTX_PRIORITY_HIGH;
else if (prio < I915_PRIORITY_NORMAL)
return GEN12_CTX_PRIORITY_LOW;
else
return GEN12_CTX_PRIORITY_NORMAL;
}
static u64 execlists_update_context(struct i915_request *rq) static u64 execlists_update_context(struct i915_request *rq)
{ {
struct intel_context *ce = rq->context; struct intel_context *ce = rq->context;
@ -669,7 +679,7 @@ static u64 execlists_update_context(struct i915_request *rq)
desc = ce->lrc.desc; desc = ce->lrc.desc;
if (rq->engine->flags & I915_ENGINE_HAS_EU_PRIORITY) if (rq->engine->flags & I915_ENGINE_HAS_EU_PRIORITY)
desc |= lrc_desc_priority(rq_prio(rq)); desc |= map_i915_prio_to_lrc_desc_prio(rq_prio(rq));
/* /*
* WaIdleLiteRestore:bdw,skl * WaIdleLiteRestore:bdw,skl


@@ -111,16 +111,6 @@ enum {
 #define XEHP_SW_COUNTER_SHIFT			58
 #define XEHP_SW_COUNTER_WIDTH			6
 
-static inline u32 lrc_desc_priority(int prio)
-{
-	if (prio > I915_PRIORITY_NORMAL)
-		return GEN12_CTX_PRIORITY_HIGH;
-	else if (prio < I915_PRIORITY_NORMAL)
-		return GEN12_CTX_PRIORITY_LOW;
-	else
-		return GEN12_CTX_PRIORITY_NORMAL;
-}
-
 static inline void lrc_runtime_start(struct intel_context *ce)
 {
 	struct intel_context_stats *stats = &ce->stats;


@@ -122,6 +122,9 @@ enum intel_guc_action {
 	INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE = 0x1002,
 	INTEL_GUC_ACTION_SCHED_ENGINE_MODE_SET = 0x1003,
 	INTEL_GUC_ACTION_SCHED_ENGINE_MODE_DONE = 0x1004,
+	INTEL_GUC_ACTION_V69_SET_CONTEXT_PRIORITY = 0x1005,
+	INTEL_GUC_ACTION_V69_SET_CONTEXT_EXECUTION_QUANTUM = 0x1006,
+	INTEL_GUC_ACTION_V69_SET_CONTEXT_PREEMPTION_TIMEOUT = 0x1007,
 	INTEL_GUC_ACTION_CONTEXT_RESET_NOTIFICATION = 0x1008,
 	INTEL_GUC_ACTION_ENGINE_FAILURE_NOTIFICATION = 0x1009,
 	INTEL_GUC_ACTION_HOST2GUC_UPDATE_CONTEXT_POLICIES = 0x100B,


@@ -170,6 +170,11 @@ struct intel_guc {
 	/** @ads_engine_usage_size: size of engine usage in the ADS */
 	u32 ads_engine_usage_size;
 
+	/** @lrc_desc_pool_v69: object allocated to hold the GuC LRC descriptor pool */
+	struct i915_vma *lrc_desc_pool_v69;
+	/** @lrc_desc_pool_vaddr_v69: contents of the GuC LRC descriptor pool */
+	void *lrc_desc_pool_vaddr_v69;
+
 	/**
 	 * @context_lookup: used to resolve intel_context from guc_id, if a
 	 * context is present in this structure it is registered with the GuC


@ -203,6 +203,20 @@ struct guc_wq_item {
u32 fence_id; u32 fence_id;
} __packed; } __packed;
struct guc_process_desc_v69 {
u32 stage_id;
u64 db_base_addr;
u32 head;
u32 tail;
u32 error_offset;
u64 wq_base_addr;
u32 wq_size_bytes;
u32 wq_status;
u32 engine_presence;
u32 priority;
u32 reserved[36];
} __packed;
struct guc_sched_wq_desc { struct guc_sched_wq_desc {
u32 head; u32 head;
u32 tail; u32 tail;
@ -227,6 +241,37 @@ struct guc_ctxt_registration_info {
}; };
#define CONTEXT_REGISTRATION_FLAG_KMD BIT(0) #define CONTEXT_REGISTRATION_FLAG_KMD BIT(0)
/* Preempt to idle on quantum expiry */
#define CONTEXT_POLICY_FLAG_PREEMPT_TO_IDLE_V69 BIT(0)
/*
* GuC Context registration descriptor.
* FIXME: This is only required to exist during context registration.
* The current 1:1 between guc_lrc_desc and LRCs for the lifetime of the LRC
* is not required.
*/
struct guc_lrc_desc_v69 {
u32 hw_context_desc;
u32 slpm_perf_mode_hint; /* SPLC v1 only */
u32 slpm_freq_hint;
u32 engine_submit_mask; /* In logical space */
u8 engine_class;
u8 reserved0[3];
u32 priority;
u32 process_desc;
u32 wq_addr;
u32 wq_size;
u32 context_flags; /* CONTEXT_REGISTRATION_* */
/* Time for one workload to execute. (in micro seconds) */
u32 execution_quantum;
/* Time to wait for a preemption request to complete before issuing a
* reset. (in micro seconds).
*/
u32 preemption_timeout;
u32 policy_flags; /* CONTEXT_POLICY_* */
u32 reserved1[19];
} __packed;
/* 32-bit KLV structure as used by policy updates and others */ /* 32-bit KLV structure as used by policy updates and others */
struct guc_klv_generic_dw_t { struct guc_klv_generic_dw_t {
u32 kl; u32 kl;


@ -414,12 +414,15 @@ struct sync_semaphore {
}; };
struct parent_scratch { struct parent_scratch {
union guc_descs {
struct guc_sched_wq_desc wq_desc; struct guc_sched_wq_desc wq_desc;
struct guc_process_desc_v69 pdesc;
} descs;
struct sync_semaphore go; struct sync_semaphore go;
struct sync_semaphore join[MAX_ENGINE_INSTANCE + 1]; struct sync_semaphore join[MAX_ENGINE_INSTANCE + 1];
u8 unused[WQ_OFFSET - sizeof(struct guc_sched_wq_desc) - u8 unused[WQ_OFFSET - sizeof(union guc_descs) -
sizeof(struct sync_semaphore) * (MAX_ENGINE_INSTANCE + 2)]; sizeof(struct sync_semaphore) * (MAX_ENGINE_INSTANCE + 2)];
u32 wq[WQ_SIZE / sizeof(u32)]; u32 wq[WQ_SIZE / sizeof(u32)];
@ -456,17 +459,23 @@ __get_parent_scratch(struct intel_context *ce)
LRC_STATE_OFFSET) / sizeof(u32))); LRC_STATE_OFFSET) / sizeof(u32)));
} }
static struct guc_sched_wq_desc * static struct guc_process_desc_v69 *
__get_wq_desc(struct intel_context *ce) __get_process_desc_v69(struct intel_context *ce)
{ {
struct parent_scratch *ps = __get_parent_scratch(ce); struct parent_scratch *ps = __get_parent_scratch(ce);
return &ps->wq_desc; return &ps->descs.pdesc;
} }
static u32 *get_wq_pointer(struct guc_sched_wq_desc *wq_desc, static struct guc_sched_wq_desc *
struct intel_context *ce, __get_wq_desc_v70(struct intel_context *ce)
u32 wqi_size) {
struct parent_scratch *ps = __get_parent_scratch(ce);
return &ps->descs.wq_desc;
}
static u32 *get_wq_pointer(struct intel_context *ce, u32 wqi_size)
{ {
/* /*
* Check for space in work queue. Caching a value of head pointer in * Check for space in work queue. Caching a value of head pointer in
@ -476,7 +485,7 @@ static u32 *get_wq_pointer(struct guc_sched_wq_desc *wq_desc,
#define AVAILABLE_SPACE \ #define AVAILABLE_SPACE \
CIRC_SPACE(ce->parallel.guc.wqi_tail, ce->parallel.guc.wqi_head, WQ_SIZE) CIRC_SPACE(ce->parallel.guc.wqi_tail, ce->parallel.guc.wqi_head, WQ_SIZE)
if (wqi_size > AVAILABLE_SPACE) { if (wqi_size > AVAILABLE_SPACE) {
ce->parallel.guc.wqi_head = READ_ONCE(wq_desc->head); ce->parallel.guc.wqi_head = READ_ONCE(*ce->parallel.guc.wq_head);
if (wqi_size > AVAILABLE_SPACE) if (wqi_size > AVAILABLE_SPACE)
return NULL; return NULL;
@ -495,11 +504,55 @@ static inline struct intel_context *__get_context(struct intel_guc *guc, u32 id)
return ce; return ce;
} }
static struct guc_lrc_desc_v69 *__get_lrc_desc_v69(struct intel_guc *guc, u32 index)
{
struct guc_lrc_desc_v69 *base = guc->lrc_desc_pool_vaddr_v69;
if (!base)
return NULL;
GEM_BUG_ON(index >= GUC_MAX_CONTEXT_ID);
return &base[index];
}
static int guc_lrc_desc_pool_create_v69(struct intel_guc *guc)
{
u32 size;
int ret;
size = PAGE_ALIGN(sizeof(struct guc_lrc_desc_v69) *
GUC_MAX_CONTEXT_ID);
ret = intel_guc_allocate_and_map_vma(guc, size, &guc->lrc_desc_pool_v69,
(void **)&guc->lrc_desc_pool_vaddr_v69);
if (ret)
return ret;
return 0;
}
static void guc_lrc_desc_pool_destroy_v69(struct intel_guc *guc)
{
if (!guc->lrc_desc_pool_vaddr_v69)
return;
guc->lrc_desc_pool_vaddr_v69 = NULL;
i915_vma_unpin_and_release(&guc->lrc_desc_pool_v69, I915_VMA_RELEASE_MAP);
}
static inline bool guc_submission_initialized(struct intel_guc *guc) static inline bool guc_submission_initialized(struct intel_guc *guc)
{ {
return guc->submission_initialized; return guc->submission_initialized;
} }
static inline void _reset_lrc_desc_v69(struct intel_guc *guc, u32 id)
{
struct guc_lrc_desc_v69 *desc = __get_lrc_desc_v69(guc, id);
if (desc)
memset(desc, 0, sizeof(*desc));
}
static inline bool ctx_id_mapped(struct intel_guc *guc, u32 id) static inline bool ctx_id_mapped(struct intel_guc *guc, u32 id)
{ {
return __get_context(guc, id); return __get_context(guc, id);
@ -526,6 +579,8 @@ static inline void clr_ctx_id_mapping(struct intel_guc *guc, u32 id)
if (unlikely(!guc_submission_initialized(guc))) if (unlikely(!guc_submission_initialized(guc)))
return; return;
_reset_lrc_desc_v69(guc, id);
/* /*
* xarray API doesn't have xa_erase_irqsave wrapper, so calling * xarray API doesn't have xa_erase_irqsave wrapper, so calling
* the lower level functions directly. * the lower level functions directly.
@ -611,7 +666,7 @@ int intel_guc_wait_for_idle(struct intel_guc *guc, long timeout)
true, timeout); true, timeout);
} }
static int guc_context_policy_init(struct intel_context *ce, bool loop); static int guc_context_policy_init_v70(struct intel_context *ce, bool loop);
static int try_context_registration(struct intel_context *ce, bool loop); static int try_context_registration(struct intel_context *ce, bool loop);
static int __guc_add_request(struct intel_guc *guc, struct i915_request *rq) static int __guc_add_request(struct intel_guc *guc, struct i915_request *rq)
@ -639,7 +694,7 @@ static int __guc_add_request(struct intel_guc *guc, struct i915_request *rq)
GEM_BUG_ON(context_guc_id_invalid(ce)); GEM_BUG_ON(context_guc_id_invalid(ce));
if (context_policy_required(ce)) { if (context_policy_required(ce)) {
err = guc_context_policy_init(ce, false); err = guc_context_policy_init_v70(ce, false);
if (err) if (err)
return err; return err;
} }
@ -737,9 +792,7 @@ static u32 wq_space_until_wrap(struct intel_context *ce)
return (WQ_SIZE - ce->parallel.guc.wqi_tail); return (WQ_SIZE - ce->parallel.guc.wqi_tail);
} }
static void write_wqi(struct guc_sched_wq_desc *wq_desc, static void write_wqi(struct intel_context *ce, u32 wqi_size)
struct intel_context *ce,
u32 wqi_size)
{ {
BUILD_BUG_ON(!is_power_of_2(WQ_SIZE)); BUILD_BUG_ON(!is_power_of_2(WQ_SIZE));
@ -750,13 +803,12 @@ static void write_wqi(struct guc_sched_wq_desc *wq_desc,
ce->parallel.guc.wqi_tail = (ce->parallel.guc.wqi_tail + wqi_size) & ce->parallel.guc.wqi_tail = (ce->parallel.guc.wqi_tail + wqi_size) &
(WQ_SIZE - 1); (WQ_SIZE - 1);
WRITE_ONCE(wq_desc->tail, ce->parallel.guc.wqi_tail); WRITE_ONCE(*ce->parallel.guc.wq_tail, ce->parallel.guc.wqi_tail);
} }
static int guc_wq_noop_append(struct intel_context *ce) static int guc_wq_noop_append(struct intel_context *ce)
{ {
struct guc_sched_wq_desc *wq_desc = __get_wq_desc(ce); u32 *wqi = get_wq_pointer(ce, wq_space_until_wrap(ce));
u32 *wqi = get_wq_pointer(wq_desc, ce, wq_space_until_wrap(ce));
u32 len_dw = wq_space_until_wrap(ce) / sizeof(u32) - 1; u32 len_dw = wq_space_until_wrap(ce) / sizeof(u32) - 1;
if (!wqi) if (!wqi)
@ -775,7 +827,6 @@ static int __guc_wq_item_append(struct i915_request *rq)
{ {
struct intel_context *ce = request_to_scheduling_context(rq); struct intel_context *ce = request_to_scheduling_context(rq);
struct intel_context *child; struct intel_context *child;
struct guc_sched_wq_desc *wq_desc = __get_wq_desc(ce);
unsigned int wqi_size = (ce->parallel.number_children + 4) * unsigned int wqi_size = (ce->parallel.number_children + 4) *
sizeof(u32); sizeof(u32);
u32 *wqi; u32 *wqi;
@ -795,7 +846,7 @@ static int __guc_wq_item_append(struct i915_request *rq)
return ret; return ret;
} }
wqi = get_wq_pointer(wq_desc, ce, wqi_size); wqi = get_wq_pointer(ce, wqi_size);
if (!wqi) if (!wqi)
return -EBUSY; return -EBUSY;
@ -810,7 +861,7 @@ static int __guc_wq_item_append(struct i915_request *rq)
for_each_child(ce, child) for_each_child(ce, child)
*wqi++ = child->ring->tail / sizeof(u64); *wqi++ = child->ring->tail / sizeof(u64);
write_wqi(wq_desc, ce, wqi_size); write_wqi(ce, wqi_size);
return 0; return 0;
} }
@ -1868,20 +1919,34 @@ static void reset_fail_worker_func(struct work_struct *w);
int intel_guc_submission_init(struct intel_guc *guc) int intel_guc_submission_init(struct intel_guc *guc)
{ {
struct intel_gt *gt = guc_to_gt(guc); struct intel_gt *gt = guc_to_gt(guc);
int ret;
if (guc->submission_initialized) if (guc->submission_initialized)
return 0; return 0;
if (guc->fw.major_ver_found < 70) {
ret = guc_lrc_desc_pool_create_v69(guc);
if (ret)
return ret;
}
guc->submission_state.guc_ids_bitmap = guc->submission_state.guc_ids_bitmap =
bitmap_zalloc(NUMBER_MULTI_LRC_GUC_ID(guc), GFP_KERNEL); bitmap_zalloc(NUMBER_MULTI_LRC_GUC_ID(guc), GFP_KERNEL);
if (!guc->submission_state.guc_ids_bitmap) if (!guc->submission_state.guc_ids_bitmap) {
return -ENOMEM; ret = -ENOMEM;
goto destroy_pool;
}
guc->timestamp.ping_delay = (POLL_TIME_CLKS / gt->clock_frequency + 1) * HZ; guc->timestamp.ping_delay = (POLL_TIME_CLKS / gt->clock_frequency + 1) * HZ;
guc->timestamp.shift = gpm_timestamp_shift(gt); guc->timestamp.shift = gpm_timestamp_shift(gt);
guc->submission_initialized = true; guc->submission_initialized = true;
return 0; return 0;
destroy_pool:
guc_lrc_desc_pool_destroy_v69(guc);
return ret;
} }
void intel_guc_submission_fini(struct intel_guc *guc) void intel_guc_submission_fini(struct intel_guc *guc)
@ -1890,6 +1955,7 @@ void intel_guc_submission_fini(struct intel_guc *guc)
return; return;
guc_flush_destroyed_contexts(guc); guc_flush_destroyed_contexts(guc);
guc_lrc_desc_pool_destroy_v69(guc);
i915_sched_engine_put(guc->sched_engine); i915_sched_engine_put(guc->sched_engine);
bitmap_free(guc->submission_state.guc_ids_bitmap); bitmap_free(guc->submission_state.guc_ids_bitmap);
guc->submission_initialized = false; guc->submission_initialized = false;
@ -2147,7 +2213,31 @@ static void unpin_guc_id(struct intel_guc *guc, struct intel_context *ce)
spin_unlock_irqrestore(&guc->submission_state.lock, flags); spin_unlock_irqrestore(&guc->submission_state.lock, flags);
} }
static int __guc_action_register_multi_lrc(struct intel_guc *guc, static int __guc_action_register_multi_lrc_v69(struct intel_guc *guc,
struct intel_context *ce,
u32 guc_id,
u32 offset,
bool loop)
{
struct intel_context *child;
u32 action[4 + MAX_ENGINE_INSTANCE];
int len = 0;
GEM_BUG_ON(ce->parallel.number_children > MAX_ENGINE_INSTANCE);
action[len++] = INTEL_GUC_ACTION_REGISTER_CONTEXT_MULTI_LRC;
action[len++] = guc_id;
action[len++] = ce->parallel.number_children + 1;
action[len++] = offset;
for_each_child(ce, child) {
offset += sizeof(struct guc_lrc_desc_v69);
action[len++] = offset;
}
return guc_submission_send_busy_loop(guc, action, len, 0, loop);
}
static int __guc_action_register_multi_lrc_v70(struct intel_guc *guc,
struct intel_context *ce, struct intel_context *ce,
struct guc_ctxt_registration_info *info, struct guc_ctxt_registration_info *info,
bool loop) bool loop)
@ -2190,7 +2280,22 @@ static int __guc_action_register_multi_lrc(struct intel_guc *guc,
return guc_submission_send_busy_loop(guc, action, len, 0, loop); return guc_submission_send_busy_loop(guc, action, len, 0, loop);
} }
static int __guc_action_register_context(struct intel_guc *guc, static int __guc_action_register_context_v69(struct intel_guc *guc,
u32 guc_id,
u32 offset,
bool loop)
{
u32 action[] = {
INTEL_GUC_ACTION_REGISTER_CONTEXT,
guc_id,
offset,
};
return guc_submission_send_busy_loop(guc, action, ARRAY_SIZE(action),
0, loop);
}
static int __guc_action_register_context_v70(struct intel_guc *guc,
struct guc_ctxt_registration_info *info, struct guc_ctxt_registration_info *info,
bool loop) bool loop)
{ {
@ -2213,24 +2318,52 @@ static int __guc_action_register_context(struct intel_guc *guc,
0, loop); 0, loop);
} }
static void prepare_context_registration_info(struct intel_context *ce, static void prepare_context_registration_info_v69(struct intel_context *ce);
static void prepare_context_registration_info_v70(struct intel_context *ce,
struct guc_ctxt_registration_info *info); struct guc_ctxt_registration_info *info);
static int
register_context_v69(struct intel_guc *guc, struct intel_context *ce, bool loop)
{
u32 offset = intel_guc_ggtt_offset(guc, guc->lrc_desc_pool_v69) +
ce->guc_id.id * sizeof(struct guc_lrc_desc_v69);
prepare_context_registration_info_v69(ce);
if (intel_context_is_parent(ce))
return __guc_action_register_multi_lrc_v69(guc, ce, ce->guc_id.id,
offset, loop);
else
return __guc_action_register_context_v69(guc, ce->guc_id.id,
offset, loop);
}
static int
register_context_v70(struct intel_guc *guc, struct intel_context *ce, bool loop)
{
struct guc_ctxt_registration_info info;
prepare_context_registration_info_v70(ce, &info);
if (intel_context_is_parent(ce))
return __guc_action_register_multi_lrc_v70(guc, ce, &info, loop);
else
return __guc_action_register_context_v70(guc, &info, loop);
}
static int register_context(struct intel_context *ce, bool loop) static int register_context(struct intel_context *ce, bool loop)
{ {
struct guc_ctxt_registration_info info;
struct intel_guc *guc = ce_to_guc(ce); struct intel_guc *guc = ce_to_guc(ce);
int ret; int ret;
GEM_BUG_ON(intel_context_is_child(ce)); GEM_BUG_ON(intel_context_is_child(ce));
trace_intel_context_register(ce); trace_intel_context_register(ce);
prepare_context_registration_info(ce, &info); if (guc->fw.major_ver_found >= 70)
ret = register_context_v70(guc, ce, loop);
if (intel_context_is_parent(ce))
ret = __guc_action_register_multi_lrc(guc, ce, &info, loop);
else else
ret = __guc_action_register_context(guc, &info, loop); ret = register_context_v69(guc, ce, loop);
if (likely(!ret)) { if (likely(!ret)) {
unsigned long flags; unsigned long flags;
@ -2238,7 +2371,8 @@ static int register_context(struct intel_context *ce, bool loop)
set_context_registered(ce); set_context_registered(ce);
spin_unlock_irqrestore(&ce->guc_state.lock, flags); spin_unlock_irqrestore(&ce->guc_state.lock, flags);
guc_context_policy_init(ce, loop); if (guc->fw.major_ver_found >= 70)
guc_context_policy_init_v70(ce, loop);
} }
return ret; return ret;
@ -2335,7 +2469,7 @@ static int __guc_context_set_context_policies(struct intel_guc *guc,
0, loop); 0, loop);
} }
static int guc_context_policy_init(struct intel_context *ce, bool loop) static int guc_context_policy_init_v70(struct intel_context *ce, bool loop)
{ {
struct intel_engine_cs *engine = ce->engine; struct intel_engine_cs *engine = ce->engine;
struct intel_guc *guc = &engine->gt->uc.guc; struct intel_guc *guc = &engine->gt->uc.guc;
@ -2394,7 +2528,107 @@ static int guc_context_policy_init(struct intel_context *ce, bool loop)
return ret; return ret;
} }
static void prepare_context_registration_info(struct intel_context *ce, static void guc_context_policy_init_v69(struct intel_engine_cs *engine,
struct guc_lrc_desc_v69 *desc)
{
desc->policy_flags = 0;
if (engine->flags & I915_ENGINE_WANT_FORCED_PREEMPTION)
desc->policy_flags |= CONTEXT_POLICY_FLAG_PREEMPT_TO_IDLE_V69;
/* NB: For both of these, zero means disabled. */
desc->execution_quantum = engine->props.timeslice_duration_ms * 1000;
desc->preemption_timeout = engine->props.preempt_timeout_ms * 1000;
}
static u32 map_guc_prio_to_lrc_desc_prio(u8 prio)
{
/*
* this matches the mapping we do in map_i915_prio_to_guc_prio()
* (e.g. prio < I915_PRIORITY_NORMAL maps to GUC_CLIENT_PRIORITY_NORMAL)
*/
switch (prio) {
default:
MISSING_CASE(prio);
fallthrough;
case GUC_CLIENT_PRIORITY_KMD_NORMAL:
return GEN12_CTX_PRIORITY_NORMAL;
case GUC_CLIENT_PRIORITY_NORMAL:
return GEN12_CTX_PRIORITY_LOW;
case GUC_CLIENT_PRIORITY_HIGH:
case GUC_CLIENT_PRIORITY_KMD_HIGH:
return GEN12_CTX_PRIORITY_HIGH;
}
}
static void prepare_context_registration_info_v69(struct intel_context *ce)
{
struct intel_engine_cs *engine = ce->engine;
struct intel_guc *guc = &engine->gt->uc.guc;
u32 ctx_id = ce->guc_id.id;
struct guc_lrc_desc_v69 *desc;
struct intel_context *child;
GEM_BUG_ON(!engine->mask);
/*
* Ensure LRC + CT vmas are is same region as write barrier is done
* based on CT vma region.
*/
GEM_BUG_ON(i915_gem_object_is_lmem(guc->ct.vma->obj) !=
i915_gem_object_is_lmem(ce->ring->vma->obj));
desc = __get_lrc_desc_v69(guc, ctx_id);
desc->engine_class = engine_class_to_guc_class(engine->class);
desc->engine_submit_mask = engine->logical_mask;
desc->hw_context_desc = ce->lrc.lrca;
desc->priority = ce->guc_state.prio;
desc->context_flags = CONTEXT_REGISTRATION_FLAG_KMD;
guc_context_policy_init_v69(engine, desc);
/*
* If context is a parent, we need to register a process descriptor
* describing a work queue and register all child contexts.
*/
if (intel_context_is_parent(ce)) {
struct guc_process_desc_v69 *pdesc;
ce->parallel.guc.wqi_tail = 0;
ce->parallel.guc.wqi_head = 0;
desc->process_desc = i915_ggtt_offset(ce->state) +
__get_parent_scratch_offset(ce);
desc->wq_addr = i915_ggtt_offset(ce->state) +
__get_wq_offset(ce);
desc->wq_size = WQ_SIZE;
pdesc = __get_process_desc_v69(ce);
memset(pdesc, 0, sizeof(*(pdesc)));
pdesc->stage_id = ce->guc_id.id;
pdesc->wq_base_addr = desc->wq_addr;
pdesc->wq_size_bytes = desc->wq_size;
pdesc->wq_status = WQ_STATUS_ACTIVE;
ce->parallel.guc.wq_head = &pdesc->head;
ce->parallel.guc.wq_tail = &pdesc->tail;
ce->parallel.guc.wq_status = &pdesc->wq_status;
for_each_child(ce, child) {
desc = __get_lrc_desc_v69(guc, child->guc_id.id);
desc->engine_class =
engine_class_to_guc_class(engine->class);
desc->hw_context_desc = child->lrc.lrca;
desc->priority = ce->guc_state.prio;
desc->context_flags = CONTEXT_REGISTRATION_FLAG_KMD;
guc_context_policy_init_v69(engine, desc);
}
clear_children_join_go_memory(ce);
}
}
static void prepare_context_registration_info_v70(struct intel_context *ce,
struct guc_ctxt_registration_info *info) struct guc_ctxt_registration_info *info)
{ {
struct intel_engine_cs *engine = ce->engine; struct intel_engine_cs *engine = ce->engine;
@ -2420,6 +2654,8 @@ static void prepare_context_registration_info(struct intel_context *ce,
*/ */
info->hwlrca_lo = lower_32_bits(ce->lrc.lrca); info->hwlrca_lo = lower_32_bits(ce->lrc.lrca);
info->hwlrca_hi = upper_32_bits(ce->lrc.lrca); info->hwlrca_hi = upper_32_bits(ce->lrc.lrca);
if (engine->flags & I915_ENGINE_HAS_EU_PRIORITY)
info->hwlrca_lo |= map_guc_prio_to_lrc_desc_prio(ce->guc_state.prio);
info->flags = CONTEXT_REGISTRATION_FLAG_KMD; info->flags = CONTEXT_REGISTRATION_FLAG_KMD;
/* /*
@ -2443,10 +2679,14 @@ static void prepare_context_registration_info(struct intel_context *ce,
info->wq_base_hi = upper_32_bits(wq_base_offset); info->wq_base_hi = upper_32_bits(wq_base_offset);
info->wq_size = WQ_SIZE; info->wq_size = WQ_SIZE;
wq_desc = __get_wq_desc(ce); wq_desc = __get_wq_desc_v70(ce);
memset(wq_desc, 0, sizeof(*wq_desc)); memset(wq_desc, 0, sizeof(*wq_desc));
wq_desc->wq_status = WQ_STATUS_ACTIVE; wq_desc->wq_status = WQ_STATUS_ACTIVE;
ce->parallel.guc.wq_head = &wq_desc->head;
ce->parallel.guc.wq_tail = &wq_desc->tail;
ce->parallel.guc.wq_status = &wq_desc->wq_status;
clear_children_join_go_memory(ce); clear_children_join_go_memory(ce);
} }
} }
@ -2761,11 +3001,21 @@ static void __guc_context_set_preemption_timeout(struct intel_guc *guc,
u16 guc_id, u16 guc_id,
u32 preemption_timeout) u32 preemption_timeout)
{ {
if (guc->fw.major_ver_found >= 70) {
struct context_policy policy; struct context_policy policy;
__guc_context_policy_start_klv(&policy, guc_id); __guc_context_policy_start_klv(&policy, guc_id);
__guc_context_policy_add_preemption_timeout(&policy, preemption_timeout); __guc_context_policy_add_preemption_timeout(&policy, preemption_timeout);
__guc_context_set_context_policies(guc, &policy, true); __guc_context_set_context_policies(guc, &policy, true);
} else {
u32 action[] = {
INTEL_GUC_ACTION_V69_SET_CONTEXT_PREEMPTION_TIMEOUT,
guc_id,
preemption_timeout
};
intel_guc_send_busy_loop(guc, action, ARRAY_SIZE(action), 0, true);
}
} }
static void guc_context_ban(struct intel_context *ce, struct i915_request *rq) static void guc_context_ban(struct intel_context *ce, struct i915_request *rq)
@ -3013,11 +3263,21 @@ static int guc_context_alloc(struct intel_context *ce)
static void __guc_context_set_prio(struct intel_guc *guc, static void __guc_context_set_prio(struct intel_guc *guc,
struct intel_context *ce) struct intel_context *ce)
{ {
if (guc->fw.major_ver_found >= 70) {
struct context_policy policy; struct context_policy policy;
__guc_context_policy_start_klv(&policy, ce->guc_id.id); __guc_context_policy_start_klv(&policy, ce->guc_id.id);
__guc_context_policy_add_priority(&policy, ce->guc_state.prio); __guc_context_policy_add_priority(&policy, ce->guc_state.prio);
__guc_context_set_context_policies(guc, &policy, true); __guc_context_set_context_policies(guc, &policy, true);
} else {
u32 action[] = {
INTEL_GUC_ACTION_V69_SET_CONTEXT_PRIORITY,
ce->guc_id.id,
ce->guc_state.prio,
};
guc_submission_send_busy_loop(guc, action, ARRAY_SIZE(action), 0, true);
}
} }
static void guc_context_set_prio(struct intel_guc *guc, static void guc_context_set_prio(struct intel_guc *guc,
@ -4527,17 +4787,19 @@ void intel_guc_submission_print_context_info(struct intel_guc *guc,
guc_log_context_priority(p, ce); guc_log_context_priority(p, ce);
if (intel_context_is_parent(ce)) { if (intel_context_is_parent(ce)) {
struct guc_sched_wq_desc *wq_desc = __get_wq_desc(ce);
struct intel_context *child; struct intel_context *child;
drm_printf(p, "\t\tNumber children: %u\n", drm_printf(p, "\t\tNumber children: %u\n",
ce->parallel.number_children); ce->parallel.number_children);
if (ce->parallel.guc.wq_status) {
drm_printf(p, "\t\tWQI Head: %u\n", drm_printf(p, "\t\tWQI Head: %u\n",
READ_ONCE(wq_desc->head)); READ_ONCE(*ce->parallel.guc.wq_head));
drm_printf(p, "\t\tWQI Tail: %u\n", drm_printf(p, "\t\tWQI Tail: %u\n",
READ_ONCE(wq_desc->tail)); READ_ONCE(*ce->parallel.guc.wq_tail));
drm_printf(p, "\t\tWQI Status: %u\n\n", drm_printf(p, "\t\tWQI Status: %u\n\n",
READ_ONCE(wq_desc->wq_status)); READ_ONCE(*ce->parallel.guc.wq_status));
}
if (ce->engine->emit_bb_start == if (ce->engine->emit_bb_start ==
emit_bb_start_parent_no_preempt_mid_batch) { emit_bb_start_parent_no_preempt_mid_batch) {


@ -70,6 +70,10 @@ void intel_uc_fw_change_status(struct intel_uc_fw *uc_fw,
fw_def(BROXTON, 0, guc_def(bxt, 70, 1, 1)) \ fw_def(BROXTON, 0, guc_def(bxt, 70, 1, 1)) \
fw_def(SKYLAKE, 0, guc_def(skl, 70, 1, 1)) fw_def(SKYLAKE, 0, guc_def(skl, 70, 1, 1))
#define INTEL_GUC_FIRMWARE_DEFS_FALLBACK(fw_def, guc_def) \
fw_def(ALDERLAKE_P, 0, guc_def(adlp, 69, 0, 3)) \
fw_def(ALDERLAKE_S, 0, guc_def(tgl, 69, 0, 3))
#define INTEL_HUC_FIRMWARE_DEFS(fw_def, huc_def) \ #define INTEL_HUC_FIRMWARE_DEFS(fw_def, huc_def) \
fw_def(ALDERLAKE_P, 0, huc_def(tgl, 7, 9, 3)) \ fw_def(ALDERLAKE_P, 0, huc_def(tgl, 7, 9, 3)) \
fw_def(ALDERLAKE_S, 0, huc_def(tgl, 7, 9, 3)) \ fw_def(ALDERLAKE_S, 0, huc_def(tgl, 7, 9, 3)) \
@ -105,6 +109,7 @@ void intel_uc_fw_change_status(struct intel_uc_fw *uc_fw,
MODULE_FIRMWARE(uc_); MODULE_FIRMWARE(uc_);
INTEL_GUC_FIRMWARE_DEFS(INTEL_UC_MODULE_FW, MAKE_GUC_FW_PATH) INTEL_GUC_FIRMWARE_DEFS(INTEL_UC_MODULE_FW, MAKE_GUC_FW_PATH)
INTEL_GUC_FIRMWARE_DEFS_FALLBACK(INTEL_UC_MODULE_FW, MAKE_GUC_FW_PATH)
INTEL_HUC_FIRMWARE_DEFS(INTEL_UC_MODULE_FW, MAKE_HUC_FW_PATH) INTEL_HUC_FIRMWARE_DEFS(INTEL_UC_MODULE_FW, MAKE_HUC_FW_PATH)
/* The below structs and macros are used to iterate across the list of blobs */ /* The below structs and macros are used to iterate across the list of blobs */
@ -149,6 +154,9 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
static const struct uc_fw_platform_requirement blobs_guc[] = { static const struct uc_fw_platform_requirement blobs_guc[] = {
INTEL_GUC_FIRMWARE_DEFS(MAKE_FW_LIST, GUC_FW_BLOB) INTEL_GUC_FIRMWARE_DEFS(MAKE_FW_LIST, GUC_FW_BLOB)
}; };
static const struct uc_fw_platform_requirement blobs_guc_fallback[] = {
INTEL_GUC_FIRMWARE_DEFS_FALLBACK(MAKE_FW_LIST, GUC_FW_BLOB)
};
static const struct uc_fw_platform_requirement blobs_huc[] = { static const struct uc_fw_platform_requirement blobs_huc[] = {
INTEL_HUC_FIRMWARE_DEFS(MAKE_FW_LIST, HUC_FW_BLOB) INTEL_HUC_FIRMWARE_DEFS(MAKE_FW_LIST, HUC_FW_BLOB)
}; };
@ -179,12 +187,29 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
if (p == fw_blobs[i].p && rev >= fw_blobs[i].rev) { if (p == fw_blobs[i].p && rev >= fw_blobs[i].rev) {
const struct uc_fw_blob *blob = &fw_blobs[i].blob; const struct uc_fw_blob *blob = &fw_blobs[i].blob;
uc_fw->path = blob->path; uc_fw->path = blob->path;
uc_fw->wanted_path = blob->path;
uc_fw->major_ver_wanted = blob->major; uc_fw->major_ver_wanted = blob->major;
uc_fw->minor_ver_wanted = blob->minor; uc_fw->minor_ver_wanted = blob->minor;
break; break;
} }
} }
if (uc_fw->type == INTEL_UC_FW_TYPE_GUC) {
const struct uc_fw_platform_requirement *blobs = blobs_guc_fallback;
u32 count = ARRAY_SIZE(blobs_guc_fallback);
for (i = 0; i < count && p <= blobs[i].p; i++) {
if (p == blobs[i].p && rev >= blobs[i].rev) {
const struct uc_fw_blob *blob = &blobs[i].blob;
uc_fw->fallback.path = blob->path;
uc_fw->fallback.major_ver = blob->major;
uc_fw->fallback.minor_ver = blob->minor;
break;
}
}
}
/* make sure the list is ordered as expected */ /* make sure the list is ordered as expected */
if (IS_ENABLED(CONFIG_DRM_I915_SELFTEST)) { if (IS_ENABLED(CONFIG_DRM_I915_SELFTEST)) {
for (i = 1; i < fw_count; i++) { for (i = 1; i < fw_count; i++) {
@ -338,7 +363,24 @@ int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw)
__force_fw_fetch_failures(uc_fw, -EINVAL); __force_fw_fetch_failures(uc_fw, -EINVAL);
__force_fw_fetch_failures(uc_fw, -ESTALE); __force_fw_fetch_failures(uc_fw, -ESTALE);
err = request_firmware(&fw, uc_fw->path, dev); err = firmware_request_nowarn(&fw, uc_fw->path, dev);
if (err && !intel_uc_fw_is_overridden(uc_fw) && uc_fw->fallback.path) {
err = firmware_request_nowarn(&fw, uc_fw->fallback.path, dev);
if (!err) {
drm_notice(&i915->drm,
"%s firmware %s is recommended, but only %s was found\n",
intel_uc_fw_type_repr(uc_fw->type),
uc_fw->wanted_path,
uc_fw->fallback.path);
drm_info(&i915->drm,
"Consider updating your linux-firmware pkg or downloading from %s\n",
INTEL_UC_FIRMWARE_URL);
uc_fw->path = uc_fw->fallback.path;
uc_fw->major_ver_wanted = uc_fw->fallback.major_ver;
uc_fw->minor_ver_wanted = uc_fw->fallback.minor_ver;
}
}
if (err) if (err)
goto fail; goto fail;
@ -437,7 +479,7 @@ fail:
INTEL_UC_FIRMWARE_MISSING : INTEL_UC_FIRMWARE_MISSING :
INTEL_UC_FIRMWARE_ERROR); INTEL_UC_FIRMWARE_ERROR);
drm_notice(&i915->drm, "%s firmware %s: fetch failed with error %d\n", i915_probe_error(i915, "%s firmware %s: fetch failed with error %d\n",
intel_uc_fw_type_repr(uc_fw->type), uc_fw->path, err); intel_uc_fw_type_repr(uc_fw->type), uc_fw->path, err);
drm_info(&i915->drm, "%s firmware(s) can be downloaded from %s\n", drm_info(&i915->drm, "%s firmware(s) can be downloaded from %s\n",
intel_uc_fw_type_repr(uc_fw->type), INTEL_UC_FIRMWARE_URL); intel_uc_fw_type_repr(uc_fw->type), INTEL_UC_FIRMWARE_URL);
@ -796,7 +838,13 @@ size_t intel_uc_fw_copy_rsa(struct intel_uc_fw *uc_fw, void *dst, u32 max_len)
void intel_uc_fw_dump(const struct intel_uc_fw *uc_fw, struct drm_printer *p) void intel_uc_fw_dump(const struct intel_uc_fw *uc_fw, struct drm_printer *p)
{ {
drm_printf(p, "%s firmware: %s\n", drm_printf(p, "%s firmware: %s\n",
intel_uc_fw_type_repr(uc_fw->type), uc_fw->path); intel_uc_fw_type_repr(uc_fw->type), uc_fw->wanted_path);
if (uc_fw->fallback.path) {
drm_printf(p, "%s firmware fallback: %s\n",
intel_uc_fw_type_repr(uc_fw->type), uc_fw->fallback.path);
drm_printf(p, "fallback selected: %s\n",
str_yes_no(uc_fw->path == uc_fw->fallback.path));
}
drm_printf(p, "\tstatus: %s\n", drm_printf(p, "\tstatus: %s\n",
intel_uc_fw_status_repr(uc_fw->status)); intel_uc_fw_status_repr(uc_fw->status));
drm_printf(p, "\tversion: wanted %u.%u, found %u.%u\n", drm_printf(p, "\tversion: wanted %u.%u, found %u.%u\n",


@ -74,6 +74,7 @@ struct intel_uc_fw {
const enum intel_uc_fw_status status; const enum intel_uc_fw_status status;
enum intel_uc_fw_status __status; /* no accidental overwrites */ enum intel_uc_fw_status __status; /* no accidental overwrites */
}; };
const char *wanted_path;
const char *path; const char *path;
bool user_overridden; bool user_overridden;
size_t size; size_t size;
@ -98,6 +99,12 @@ struct intel_uc_fw {
u16 major_ver_found; u16 major_ver_found;
u16 minor_ver_found; u16 minor_ver_found;
struct {
const char *path;
u16 major_ver;
u16 minor_ver;
} fallback;
u32 rsa_size; u32 rsa_size;
u32 ucode_size; u32 ucode_size;


@@ -207,6 +207,7 @@ struct dcss_dev *dcss_dev_create(struct device *dev, bool hdmi_output)
 	ret = dcss_submodules_init(dcss);
 	if (ret) {
+		of_node_put(dcss->of_port);
 		dev_err(dev, "submodules initialization failed\n");
 		goto clks_err;
 	}
@@ -237,6 +238,8 @@ void dcss_dev_destroy(struct dcss_dev *dcss)
 		dcss_clocks_disable(dcss);
 	}
 
+	of_node_put(dcss->of_port);
+
 	pm_runtime_disable(dcss->dev);
 	dcss_submodules_stop(dcss);


@@ -713,7 +713,7 @@ static int generic_edp_panel_probe(struct device *dev, struct panel_edp *panel)
 	of_property_read_u32(dev->of_node, "hpd-reliable-delay-ms", &reliable_ms);
 	desc->delay.hpd_reliable = reliable_ms;
 	of_property_read_u32(dev->of_node, "hpd-absent-delay-ms", &absent_ms);
-	desc->delay.hpd_reliable = absent_ms;
+	desc->delay.hpd_absent = absent_ms;
 
 	/* Power the panel on so we can read the EDID */
 	ret = pm_runtime_get_sync(dev);


@@ -190,7 +190,7 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
 }
 EXPORT_SYMBOL(drm_sched_entity_flush);
 
-static void drm_sched_entity_kill_jobs_irq_work(struct irq_work *wrk)
+static void drm_sched_entity_kill_jobs_work(struct work_struct *wrk)
 {
 	struct drm_sched_job *job = container_of(wrk, typeof(*job), work);
 
@@ -207,8 +207,8 @@ static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
 	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
 						 finish_cb);
 
-	init_irq_work(&job->work, drm_sched_entity_kill_jobs_irq_work);
-	irq_work_queue(&job->work);
+	INIT_WORK(&job->work, drm_sched_entity_kill_jobs_work);
+	schedule_work(&job->work);
 }
 
 static struct dma_fence *


@ -388,9 +388,9 @@ static irqreturn_t cdns_i2c_slave_isr(void *ptr)
*/ */
static irqreturn_t cdns_i2c_master_isr(void *ptr) static irqreturn_t cdns_i2c_master_isr(void *ptr)
{ {
unsigned int isr_status, avail_bytes, updatetx; unsigned int isr_status, avail_bytes;
unsigned int bytes_to_send; unsigned int bytes_to_send;
bool hold_quirk; bool updatetx;
struct cdns_i2c *id = ptr; struct cdns_i2c *id = ptr;
/* Signal completion only after everything is updated */ /* Signal completion only after everything is updated */
int done_flag = 0; int done_flag = 0;
@ -410,11 +410,7 @@ static irqreturn_t cdns_i2c_master_isr(void *ptr)
* Check if transfer size register needs to be updated again for a * Check if transfer size register needs to be updated again for a
* large data receive operation. * large data receive operation.
*/ */
updatetx = 0; updatetx = id->recv_count > id->curr_recv_count;
if (id->recv_count > id->curr_recv_count)
updatetx = 1;
hold_quirk = (id->quirks & CDNS_I2C_BROKEN_HOLD_BIT) && updatetx;
/* When receiving, handle data interrupt and completion interrupt */ /* When receiving, handle data interrupt and completion interrupt */
if (id->p_recv_buf && if (id->p_recv_buf &&
@ -445,7 +441,7 @@ static irqreturn_t cdns_i2c_master_isr(void *ptr)
break; break;
} }
if (cdns_is_holdquirk(id, hold_quirk)) if (cdns_is_holdquirk(id, updatetx))
break; break;
} }
@ -456,7 +452,7 @@ static irqreturn_t cdns_i2c_master_isr(void *ptr)
* maintain transfer size non-zero while performing a large * maintain transfer size non-zero while performing a large
* receive operation. * receive operation.
*/ */
if (cdns_is_holdquirk(id, hold_quirk)) { if (cdns_is_holdquirk(id, updatetx)) {
/* wait while fifo is full */ /* wait while fifo is full */
while (cdns_i2c_readreg(CDNS_I2C_XFER_SIZE_OFFSET) != while (cdns_i2c_readreg(CDNS_I2C_XFER_SIZE_OFFSET) !=
(id->curr_recv_count - CDNS_I2C_FIFO_DEPTH)) (id->curr_recv_count - CDNS_I2C_FIFO_DEPTH))
@ -478,22 +474,6 @@ static irqreturn_t cdns_i2c_master_isr(void *ptr)
CDNS_I2C_XFER_SIZE_OFFSET); CDNS_I2C_XFER_SIZE_OFFSET);
id->curr_recv_count = id->recv_count; id->curr_recv_count = id->recv_count;
} }
} else if (id->recv_count && !hold_quirk &&
!id->curr_recv_count) {
/* Set the slave address in address register*/
cdns_i2c_writereg(id->p_msg->addr & CDNS_I2C_ADDR_MASK,
CDNS_I2C_ADDR_OFFSET);
if (id->recv_count > CDNS_I2C_TRANSFER_SIZE) {
cdns_i2c_writereg(CDNS_I2C_TRANSFER_SIZE,
CDNS_I2C_XFER_SIZE_OFFSET);
id->curr_recv_count = CDNS_I2C_TRANSFER_SIZE;
} else {
cdns_i2c_writereg(id->recv_count,
CDNS_I2C_XFER_SIZE_OFFSET);
id->curr_recv_count = id->recv_count;
}
} }
/* Clear hold (if not repeated start) and signal completion */ /* Clear hold (if not repeated start) and signal completion */


@@ -66,7 +66,7 @@
 /* IMX I2C registers:
  * the I2C register offset is different between SoCs,
- * to provid support for all these chips, split the
+ * to provide support for all these chips, split the
  * register offset into a fixed base address and a
  * variable shift value, then the full register offset
  * will be calculated by


@@ -49,7 +49,7 @@
 #define MLXCPLD_LPCI2C_NACK_IND		2
 
 #define MLXCPLD_I2C_FREQ_1000KHZ_SET	0x04
-#define MLXCPLD_I2C_FREQ_400KHZ_SET	0x0c
+#define MLXCPLD_I2C_FREQ_400KHZ_SET	0x0e
 #define MLXCPLD_I2C_FREQ_100KHZ_SET	0x42
 
 enum mlxcpld_i2c_frequency {


@@ -7304,7 +7304,9 @@ static struct r5conf *setup_conf(struct mddev *mddev)
 		goto abort;
 
 	conf->mddev = mddev;
-	if ((conf->stripe_hashtbl = kzalloc(PAGE_SIZE, GFP_KERNEL)) == NULL)
+	ret = -ENOMEM;
+	conf->stripe_hashtbl = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!conf->stripe_hashtbl)
 		goto abort;
 
 	/* We init hash_locks[0] separately to that it can be used


@@ -13,10 +13,13 @@ lkdtm-$(CONFIG_LKDTM)		+= cfi.o
 lkdtm-$(CONFIG_LKDTM)		+= fortify.o
 lkdtm-$(CONFIG_PPC_64S_HASH_MMU)	+= powerpc.o
 
-KASAN_SANITIZE_rodata.o		:= n
 KASAN_SANITIZE_stackleak.o	:= n
+
+KASAN_SANITIZE_rodata.o		:= n
+KCSAN_SANITIZE_rodata.o		:= n
 KCOV_INSTRUMENT_rodata.o	:= n
-CFLAGS_REMOVE_rodata.o		+= $(CC_FLAGS_LTO)
+OBJECT_FILES_NON_STANDARD_rodata.o := y
+CFLAGS_REMOVE_rodata.o		+= $(CC_FLAGS_LTO) $(RETHUNK_CFLAGS)
 
 OBJCOPYFLAGS :=
 OBJCOPYFLAGS_rodata_objcopy.o := \


@@ -1298,8 +1298,9 @@ static int sdhci_omap_probe(struct platform_device *pdev)
 	/*
 	 * omap_device_pm_domain has callbacks to enable the main
 	 * functional clock, interface clock and also configure the
-	 * SYSCONFIG register of omap devices. The callback will be invoked
-	 * as part of pm_runtime_get_sync.
+	 * SYSCONFIG register to clear any boot loader set voltage
+	 * capabilities before calling sdhci_setup_host(). The
+	 * callback will be invoked as part of pm_runtime_get_sync.
 	 */
 	pm_runtime_use_autosuspend(dev);
 	pm_runtime_set_autosuspend_delay(dev, 50);
@@ -1441,6 +1442,7 @@ static int __maybe_unused sdhci_omap_runtime_suspend(struct device *dev)
 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
 	struct sdhci_omap_host *omap_host = sdhci_pltfm_priv(pltfm_host);
 
+	if (omap_host->con != -EINVAL)
 		sdhci_runtime_suspend_host(host);
 
 	sdhci_omap_context_save(omap_host);
@@ -1458,10 +1460,10 @@ static int __maybe_unused sdhci_omap_runtime_resume(struct device *dev)
 	pinctrl_pm_select_default_state(dev);
 
-	if (omap_host->con != -EINVAL)
+	if (omap_host->con != -EINVAL) {
 		sdhci_omap_context_restore(omap_host);
-
-	sdhci_runtime_resume_host(host, 0);
+		sdhci_runtime_resume_host(host, 0);
+	}
 
 	return 0;
 }


@@ -850,9 +850,10 @@ static int gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
 	unsigned int tRP_ps;
 	bool use_half_period;
 	int sample_delay_ps, sample_delay_factor;
-	u16 busy_timeout_cycles;
+	unsigned int busy_timeout_cycles;
 	u8 wrn_dly_sel;
 	unsigned long clk_rate, min_rate;
+	u64 busy_timeout_ps;
 
 	if (sdr->tRC_min >= 30000) {
 		/* ONFI non-EDO modes [0-3] */
@@ -885,7 +886,8 @@ static int gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
 	addr_setup_cycles = TO_CYCLES(sdr->tALS_min, period_ps);
 	data_setup_cycles = TO_CYCLES(sdr->tDS_min, period_ps);
 	data_hold_cycles = TO_CYCLES(sdr->tDH_min, period_ps);
-	busy_timeout_cycles = TO_CYCLES(sdr->tWB_max + sdr->tR_max, period_ps);
+	busy_timeout_ps = max(sdr->tBERS_max, sdr->tPROG_max);
+	busy_timeout_cycles = TO_CYCLES(busy_timeout_ps, period_ps);
 
 	hw->timing0 = BF_GPMI_TIMING0_ADDRESS_SETUP(addr_setup_cycles) |
 		      BF_GPMI_TIMING0_DATA_HOLD(data_hold_cycles) |
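
For context, not part of the commit itself: tWB_max + tR_max only covers the busy time of a page read, while block erase (tBERS_max) and page program (tPROG_max) can run for milliseconds. As a rough worked example with illustrative numbers, a 3 ms erase at a 20,000 ps cycle period needs 3,000,000,000 ps / 20,000 ps = 150,000 cycles, which does not fit in the old u16 busy_timeout_cycles (maximum 65,535); hence the wider unsigned int counter and the u64 intermediate held in picoseconds.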


@ -563,7 +563,7 @@ static struct sk_buff *amt_build_igmp_gq(struct amt_dev *amt)
ihv3->nsrcs = 0; ihv3->nsrcs = 0;
ihv3->resv = 0; ihv3->resv = 0;
ihv3->suppress = false; ihv3->suppress = false;
ihv3->qrv = amt->net->ipv4.sysctl_igmp_qrv; ihv3->qrv = READ_ONCE(amt->net->ipv4.sysctl_igmp_qrv);
ihv3->csum = 0; ihv3->csum = 0;
csum = &ihv3->csum; csum = &ihv3->csum;
csum_start = (void *)ihv3; csum_start = (void *)ihv3;
@ -577,14 +577,14 @@ static struct sk_buff *amt_build_igmp_gq(struct amt_dev *amt)
return skb; return skb;
} }
static void __amt_update_gw_status(struct amt_dev *amt, enum amt_status status, static void amt_update_gw_status(struct amt_dev *amt, enum amt_status status,
bool validate) bool validate)
{ {
if (validate && amt->status >= status) if (validate && amt->status >= status)
return; return;
netdev_dbg(amt->dev, "Update GW status %s -> %s", netdev_dbg(amt->dev, "Update GW status %s -> %s",
status_str[amt->status], status_str[status]); status_str[amt->status], status_str[status]);
amt->status = status; WRITE_ONCE(amt->status, status);
} }
static void __amt_update_relay_status(struct amt_tunnel_list *tunnel, static void __amt_update_relay_status(struct amt_tunnel_list *tunnel,
@ -600,14 +600,6 @@ static void __amt_update_relay_status(struct amt_tunnel_list *tunnel,
tunnel->status = status; tunnel->status = status;
} }
static void amt_update_gw_status(struct amt_dev *amt, enum amt_status status,
bool validate)
{
spin_lock_bh(&amt->lock);
__amt_update_gw_status(amt, status, validate);
spin_unlock_bh(&amt->lock);
}
static void amt_update_relay_status(struct amt_tunnel_list *tunnel, static void amt_update_relay_status(struct amt_tunnel_list *tunnel,
enum amt_status status, bool validate) enum amt_status status, bool validate)
{ {
@ -700,9 +692,7 @@ static void amt_send_discovery(struct amt_dev *amt)
if (unlikely(net_xmit_eval(err))) if (unlikely(net_xmit_eval(err)))
amt->dev->stats.tx_errors++; amt->dev->stats.tx_errors++;
spin_lock_bh(&amt->lock); amt_update_gw_status(amt, AMT_STATUS_SENT_DISCOVERY, true);
__amt_update_gw_status(amt, AMT_STATUS_SENT_DISCOVERY, true);
spin_unlock_bh(&amt->lock);
out: out:
rcu_read_unlock(); rcu_read_unlock();
} }
@ -900,6 +890,28 @@ static void amt_send_mld_gq(struct amt_dev *amt, struct amt_tunnel_list *tunnel)
} }
#endif #endif
static bool amt_queue_event(struct amt_dev *amt, enum amt_event event,
struct sk_buff *skb)
{
int index;
spin_lock_bh(&amt->lock);
if (amt->nr_events >= AMT_MAX_EVENTS) {
spin_unlock_bh(&amt->lock);
return 1;
}
index = (amt->event_idx + amt->nr_events) % AMT_MAX_EVENTS;
amt->events[index].event = event;
amt->events[index].skb = skb;
amt->nr_events++;
amt->event_idx %= AMT_MAX_EVENTS;
queue_work(amt_wq, &amt->event_wq);
spin_unlock_bh(&amt->lock);
return 0;
}
static void amt_secret_work(struct work_struct *work) static void amt_secret_work(struct work_struct *work)
{ {
struct amt_dev *amt = container_of(to_delayed_work(work), struct amt_dev *amt = container_of(to_delayed_work(work),
@ -913,24 +925,61 @@ static void amt_secret_work(struct work_struct *work)
msecs_to_jiffies(AMT_SECRET_TIMEOUT)); msecs_to_jiffies(AMT_SECRET_TIMEOUT));
} }
static void amt_event_send_discovery(struct amt_dev *amt)
{
if (amt->status > AMT_STATUS_SENT_DISCOVERY)
goto out;
get_random_bytes(&amt->nonce, sizeof(__be32));
amt_send_discovery(amt);
out:
mod_delayed_work(amt_wq, &amt->discovery_wq,
msecs_to_jiffies(AMT_DISCOVERY_TIMEOUT));
}
static void amt_discovery_work(struct work_struct *work) static void amt_discovery_work(struct work_struct *work)
{ {
struct amt_dev *amt = container_of(to_delayed_work(work), struct amt_dev *amt = container_of(to_delayed_work(work),
struct amt_dev, struct amt_dev,
discovery_wq); discovery_wq);
spin_lock_bh(&amt->lock); if (amt_queue_event(amt, AMT_EVENT_SEND_DISCOVERY, NULL))
if (amt->status > AMT_STATUS_SENT_DISCOVERY)
goto out;
get_random_bytes(&amt->nonce, sizeof(__be32));
spin_unlock_bh(&amt->lock);
amt_send_discovery(amt);
spin_lock_bh(&amt->lock);
out:
mod_delayed_work(amt_wq, &amt->discovery_wq, mod_delayed_work(amt_wq, &amt->discovery_wq,
msecs_to_jiffies(AMT_DISCOVERY_TIMEOUT)); msecs_to_jiffies(AMT_DISCOVERY_TIMEOUT));
spin_unlock_bh(&amt->lock); }
static void amt_event_send_request(struct amt_dev *amt)
{
u32 exp;
if (amt->status < AMT_STATUS_RECEIVED_ADVERTISEMENT)
goto out;
if (amt->req_cnt > AMT_MAX_REQ_COUNT) {
netdev_dbg(amt->dev, "Gateway is not ready");
amt->qi = AMT_INIT_REQ_TIMEOUT;
WRITE_ONCE(amt->ready4, false);
WRITE_ONCE(amt->ready6, false);
amt->remote_ip = 0;
amt_update_gw_status(amt, AMT_STATUS_INIT, false);
amt->req_cnt = 0;
amt->nonce = 0;
goto out;
}
if (!amt->req_cnt) {
WRITE_ONCE(amt->ready4, false);
WRITE_ONCE(amt->ready6, false);
get_random_bytes(&amt->nonce, sizeof(__be32));
}
amt_send_request(amt, false);
amt_send_request(amt, true);
amt_update_gw_status(amt, AMT_STATUS_SENT_REQUEST, true);
amt->req_cnt++;
out:
exp = min_t(u32, (1 * (1 << amt->req_cnt)), AMT_MAX_REQ_TIMEOUT);
mod_delayed_work(amt_wq, &amt->req_wq, msecs_to_jiffies(exp * 1000));
} }
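amt_event_send_request() re-arms its delayed work with an exponentially growing, capped timeout (exp = min(1 << req_cnt, AMT_MAX_REQ_TIMEOUT) seconds). A tiny sketch of that capped backoff; DEMO_MAX_TIMEOUT is a stand-in value, not the driver's constant:

#include <linux/kernel.h>
#include <linux/types.h>

#define DEMO_MAX_TIMEOUT	120	/* assumed cap in seconds, illustrative */

/* attempt is bounded by the caller (as req_cnt is above), so the
 * shift cannot overflow.
 */
static u32 demo_backoff_secs(u32 attempt)
{
	return min_t(u32, 1U << attempt, DEMO_MAX_TIMEOUT);
}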
static void amt_req_work(struct work_struct *work) static void amt_req_work(struct work_struct *work)
@ -938,33 +987,10 @@ static void amt_req_work(struct work_struct *work)
struct amt_dev *amt = container_of(to_delayed_work(work), struct amt_dev *amt = container_of(to_delayed_work(work),
struct amt_dev, struct amt_dev,
req_wq); req_wq);
u32 exp;
spin_lock_bh(&amt->lock); if (amt_queue_event(amt, AMT_EVENT_SEND_REQUEST, NULL))
if (amt->status < AMT_STATUS_RECEIVED_ADVERTISEMENT) mod_delayed_work(amt_wq, &amt->req_wq,
goto out; msecs_to_jiffies(100));
if (amt->req_cnt > AMT_MAX_REQ_COUNT) {
netdev_dbg(amt->dev, "Gateway is not ready");
amt->qi = AMT_INIT_REQ_TIMEOUT;
amt->ready4 = false;
amt->ready6 = false;
amt->remote_ip = 0;
__amt_update_gw_status(amt, AMT_STATUS_INIT, false);
amt->req_cnt = 0;
goto out;
}
spin_unlock_bh(&amt->lock);
amt_send_request(amt, false);
amt_send_request(amt, true);
spin_lock_bh(&amt->lock);
__amt_update_gw_status(amt, AMT_STATUS_SENT_REQUEST, true);
amt->req_cnt++;
out:
exp = min_t(u32, (1 * (1 << amt->req_cnt)), AMT_MAX_REQ_TIMEOUT);
mod_delayed_work(amt_wq, &amt->req_wq, msecs_to_jiffies(exp * 1000));
spin_unlock_bh(&amt->lock);
} }
static bool amt_send_membership_update(struct amt_dev *amt, static bool amt_send_membership_update(struct amt_dev *amt,
@ -1220,7 +1246,8 @@ static netdev_tx_t amt_dev_xmit(struct sk_buff *skb, struct net_device *dev)
/* Gateway only passes IGMP/MLD packets */ /* Gateway only passes IGMP/MLD packets */
if (!report) if (!report)
goto free; goto free;
if ((!v6 && !amt->ready4) || (v6 && !amt->ready6)) if ((!v6 && !READ_ONCE(amt->ready4)) ||
(v6 && !READ_ONCE(amt->ready6)))
goto free; goto free;
if (amt_send_membership_update(amt, skb, v6)) if (amt_send_membership_update(amt, skb, v6))
goto free; goto free;
@ -2236,6 +2263,10 @@ static bool amt_advertisement_handler(struct amt_dev *amt, struct sk_buff *skb)
ipv4_is_zeronet(amta->ip4)) ipv4_is_zeronet(amta->ip4))
return true; return true;
if (amt->status != AMT_STATUS_SENT_DISCOVERY ||
amt->nonce != amta->nonce)
return true;
amt->remote_ip = amta->ip4; amt->remote_ip = amta->ip4;
netdev_dbg(amt->dev, "advertised remote ip = %pI4\n", &amt->remote_ip); netdev_dbg(amt->dev, "advertised remote ip = %pI4\n", &amt->remote_ip);
mod_delayed_work(amt_wq, &amt->req_wq, 0); mod_delayed_work(amt_wq, &amt->req_wq, 0);
@ -2251,6 +2282,9 @@ static bool amt_multicast_data_handler(struct amt_dev *amt, struct sk_buff *skb)
struct ethhdr *eth; struct ethhdr *eth;
struct iphdr *iph; struct iphdr *iph;
if (READ_ONCE(amt->status) != AMT_STATUS_SENT_UPDATE)
return true;
hdr_size = sizeof(*amtmd) + sizeof(struct udphdr); hdr_size = sizeof(*amtmd) + sizeof(struct udphdr);
if (!pskb_may_pull(skb, hdr_size)) if (!pskb_may_pull(skb, hdr_size))
return true; return true;
@ -2325,6 +2359,9 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
if (amtmq->reserved || amtmq->version) if (amtmq->reserved || amtmq->version)
return true; return true;
if (amtmq->nonce != amt->nonce)
return true;
hdr_size -= sizeof(*eth); hdr_size -= sizeof(*eth);
if (iptunnel_pull_header(skb, hdr_size, htons(ETH_P_TEB), false)) if (iptunnel_pull_header(skb, hdr_size, htons(ETH_P_TEB), false))
return true; return true;
@ -2339,6 +2376,9 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
iph = ip_hdr(skb); iph = ip_hdr(skb);
if (iph->version == 4) { if (iph->version == 4) {
if (READ_ONCE(amt->ready4))
return true;
if (!pskb_may_pull(skb, sizeof(*iph) + AMT_IPHDR_OPTS + if (!pskb_may_pull(skb, sizeof(*iph) + AMT_IPHDR_OPTS +
sizeof(*ihv3))) sizeof(*ihv3)))
return true; return true;
@ -2349,12 +2389,10 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
ihv3 = skb_pull(skb, sizeof(*iph) + AMT_IPHDR_OPTS); ihv3 = skb_pull(skb, sizeof(*iph) + AMT_IPHDR_OPTS);
skb_reset_transport_header(skb); skb_reset_transport_header(skb);
skb_push(skb, sizeof(*iph) + AMT_IPHDR_OPTS); skb_push(skb, sizeof(*iph) + AMT_IPHDR_OPTS);
spin_lock_bh(&amt->lock); WRITE_ONCE(amt->ready4, true);
amt->ready4 = true;
amt->mac = amtmq->response_mac; amt->mac = amtmq->response_mac;
amt->req_cnt = 0; amt->req_cnt = 0;
amt->qi = ihv3->qqic; amt->qi = ihv3->qqic;
spin_unlock_bh(&amt->lock);
skb->protocol = htons(ETH_P_IP); skb->protocol = htons(ETH_P_IP);
eth->h_proto = htons(ETH_P_IP); eth->h_proto = htons(ETH_P_IP);
ip_eth_mc_map(iph->daddr, eth->h_dest); ip_eth_mc_map(iph->daddr, eth->h_dest);
@ -2363,6 +2401,9 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
struct mld2_query *mld2q; struct mld2_query *mld2q;
struct ipv6hdr *ip6h; struct ipv6hdr *ip6h;
if (READ_ONCE(amt->ready6))
return true;
if (!pskb_may_pull(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS + if (!pskb_may_pull(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS +
sizeof(*mld2q))) sizeof(*mld2q)))
return true; return true;
@ -2374,12 +2415,10 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
mld2q = skb_pull(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS); mld2q = skb_pull(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS);
skb_reset_transport_header(skb); skb_reset_transport_header(skb);
skb_push(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS); skb_push(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS);
spin_lock_bh(&amt->lock); WRITE_ONCE(amt->ready6, true);
amt->ready6 = true;
amt->mac = amtmq->response_mac; amt->mac = amtmq->response_mac;
amt->req_cnt = 0; amt->req_cnt = 0;
amt->qi = mld2q->mld2q_qqic; amt->qi = mld2q->mld2q_qqic;
spin_unlock_bh(&amt->lock);
skb->protocol = htons(ETH_P_IPV6); skb->protocol = htons(ETH_P_IPV6);
eth->h_proto = htons(ETH_P_IPV6); eth->h_proto = htons(ETH_P_IPV6);
ipv6_eth_mc_map(&ip6h->daddr, eth->h_dest); ipv6_eth_mc_map(&ip6h->daddr, eth->h_dest);
@ -2392,12 +2431,14 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
skb->pkt_type = PACKET_MULTICAST; skb->pkt_type = PACKET_MULTICAST;
skb->ip_summed = CHECKSUM_NONE; skb->ip_summed = CHECKSUM_NONE;
len = skb->len; len = skb->len;
local_bh_disable();
if (__netif_rx(skb) == NET_RX_SUCCESS) { if (__netif_rx(skb) == NET_RX_SUCCESS) {
amt_update_gw_status(amt, AMT_STATUS_RECEIVED_QUERY, true); amt_update_gw_status(amt, AMT_STATUS_RECEIVED_QUERY, true);
dev_sw_netstats_rx_add(amt->dev, len); dev_sw_netstats_rx_add(amt->dev, len);
} else { } else {
amt->dev->stats.rx_dropped++; amt->dev->stats.rx_dropped++;
} }
local_bh_enable();
return false; return false;
} }
@ -2638,7 +2679,9 @@ static bool amt_request_handler(struct amt_dev *amt, struct sk_buff *skb)
if (tunnel->ip4 == iph->saddr) if (tunnel->ip4 == iph->saddr)
goto send; goto send;
spin_lock_bh(&amt->lock);
if (amt->nr_tunnels >= amt->max_tunnels) { if (amt->nr_tunnels >= amt->max_tunnels) {
spin_unlock_bh(&amt->lock);
icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0); icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0);
return true; return true;
} }
@ -2646,8 +2689,10 @@ static bool amt_request_handler(struct amt_dev *amt, struct sk_buff *skb)
tunnel = kzalloc(sizeof(*tunnel) + tunnel = kzalloc(sizeof(*tunnel) +
(sizeof(struct hlist_head) * amt->hash_buckets), (sizeof(struct hlist_head) * amt->hash_buckets),
GFP_ATOMIC); GFP_ATOMIC);
if (!tunnel) if (!tunnel) {
spin_unlock_bh(&amt->lock);
return true; return true;
}
tunnel->source_port = udph->source; tunnel->source_port = udph->source;
tunnel->ip4 = iph->saddr; tunnel->ip4 = iph->saddr;
@ -2660,10 +2705,9 @@ static bool amt_request_handler(struct amt_dev *amt, struct sk_buff *skb)
INIT_DELAYED_WORK(&tunnel->gc_wq, amt_tunnel_expire); INIT_DELAYED_WORK(&tunnel->gc_wq, amt_tunnel_expire);
spin_lock_bh(&amt->lock);
list_add_tail_rcu(&tunnel->list, &amt->tunnel_list); list_add_tail_rcu(&tunnel->list, &amt->tunnel_list);
tunnel->key = amt->key; tunnel->key = amt->key;
amt_update_relay_status(tunnel, AMT_STATUS_RECEIVED_REQUEST, true); __amt_update_relay_status(tunnel, AMT_STATUS_RECEIVED_REQUEST, true);
amt->nr_tunnels++; amt->nr_tunnels++;
mod_delayed_work(amt_wq, &tunnel->gc_wq, mod_delayed_work(amt_wq, &tunnel->gc_wq,
msecs_to_jiffies(amt_gmi(amt))); msecs_to_jiffies(amt_gmi(amt)));
@ -2688,6 +2732,38 @@ send:
return false; return false;
} }
static void amt_gw_rcv(struct amt_dev *amt, struct sk_buff *skb)
{
int type = amt_parse_type(skb);
int err = 1;
if (type == -1)
goto drop;
if (amt->mode == AMT_MODE_GATEWAY) {
switch (type) {
case AMT_MSG_ADVERTISEMENT:
err = amt_advertisement_handler(amt, skb);
break;
case AMT_MSG_MEMBERSHIP_QUERY:
err = amt_membership_query_handler(amt, skb);
if (!err)
return;
break;
default:
netdev_dbg(amt->dev, "Invalid type of Gateway\n");
break;
}
}
drop:
if (err) {
amt->dev->stats.rx_dropped++;
kfree_skb(skb);
} else {
consume_skb(skb);
}
}
static int amt_rcv(struct sock *sk, struct sk_buff *skb) static int amt_rcv(struct sock *sk, struct sk_buff *skb)
{ {
struct amt_dev *amt; struct amt_dev *amt;
@ -2719,8 +2795,12 @@ static int amt_rcv(struct sock *sk, struct sk_buff *skb)
err = true; err = true;
goto drop; goto drop;
} }
err = amt_advertisement_handler(amt, skb); if (amt_queue_event(amt, AMT_EVENT_RECEIVE, skb)) {
break; netdev_dbg(amt->dev, "AMT Event queue full\n");
err = true;
goto drop;
}
goto out;
case AMT_MSG_MULTICAST_DATA: case AMT_MSG_MULTICAST_DATA:
if (iph->saddr != amt->remote_ip) { if (iph->saddr != amt->remote_ip) {
netdev_dbg(amt->dev, "Invalid Relay IP\n"); netdev_dbg(amt->dev, "Invalid Relay IP\n");
@ -2738,10 +2818,11 @@ static int amt_rcv(struct sock *sk, struct sk_buff *skb)
err = true; err = true;
goto drop; goto drop;
} }
err = amt_membership_query_handler(amt, skb); if (amt_queue_event(amt, AMT_EVENT_RECEIVE, skb)) {
if (err) netdev_dbg(amt->dev, "AMT Event queue full\n");
err = true;
goto drop; goto drop;
else }
goto out; goto out;
default: default:
err = true; err = true;
@ -2780,6 +2861,46 @@ out:
return 0; return 0;
} }
static void amt_event_work(struct work_struct *work)
{
struct amt_dev *amt = container_of(work, struct amt_dev, event_wq);
struct sk_buff *skb;
u8 event;
int i;
for (i = 0; i < AMT_MAX_EVENTS; i++) {
spin_lock_bh(&amt->lock);
if (amt->nr_events == 0) {
spin_unlock_bh(&amt->lock);
return;
}
event = amt->events[amt->event_idx].event;
skb = amt->events[amt->event_idx].skb;
amt->events[amt->event_idx].event = AMT_EVENT_NONE;
amt->events[amt->event_idx].skb = NULL;
amt->nr_events--;
amt->event_idx++;
amt->event_idx %= AMT_MAX_EVENTS;
spin_unlock_bh(&amt->lock);
switch (event) {
case AMT_EVENT_RECEIVE:
amt_gw_rcv(amt, skb);
break;
case AMT_EVENT_SEND_DISCOVERY:
amt_event_send_discovery(amt);
break;
case AMT_EVENT_SEND_REQUEST:
amt_event_send_request(amt);
break;
default:
if (skb)
kfree_skb(skb);
break;
}
}
}
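amt_queue_event() and amt_event_work() above form a small fixed-size ring of pending events: producers append under amt->lock and kick a workqueue, and the worker pops one entry at a time, dropping the lock before handling it. A condensed sketch of the same producer/consumer shape, with illustrative names and sizes (not the driver's):

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/workqueue.h>

#define DEMO_MAX_EVENTS	16	/* assumed ring size */

struct demo_event {
	u8 event;
	struct sk_buff *skb;
};

struct demo_dev {
	spinlock_t lock;
	struct work_struct event_wq;
	struct demo_event events[DEMO_MAX_EVENTS];
	int event_idx;		/* oldest queued entry */
	int nr_events;
};

static int demo_queue_event(struct demo_dev *d, u8 event, struct sk_buff *skb)
{
	int index;

	spin_lock_bh(&d->lock);
	if (d->nr_events >= DEMO_MAX_EVENTS) {
		spin_unlock_bh(&d->lock);
		return -EBUSY;		/* caller frees the skb */
	}
	index = (d->event_idx + d->nr_events) % DEMO_MAX_EVENTS;
	d->events[index].event = event;
	d->events[index].skb = skb;
	d->nr_events++;
	schedule_work(&d->event_wq);
	spin_unlock_bh(&d->lock);
	return 0;
}

static void demo_event_work(struct work_struct *work)
{
	struct demo_dev *d = container_of(work, struct demo_dev, event_wq);
	struct demo_event ev;
	int i;

	for (i = 0; i < DEMO_MAX_EVENTS; i++) {
		spin_lock_bh(&d->lock);
		if (!d->nr_events) {
			spin_unlock_bh(&d->lock);
			return;
		}
		ev = d->events[d->event_idx];
		d->events[d->event_idx].skb = NULL;
		d->nr_events--;
		d->event_idx = (d->event_idx + 1) % DEMO_MAX_EVENTS;
		spin_unlock_bh(&d->lock);

		/* handle ev.event here, outside the lock */
		if (ev.skb)
			kfree_skb(ev.skb);
	}
}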
static int amt_err_lookup(struct sock *sk, struct sk_buff *skb) static int amt_err_lookup(struct sock *sk, struct sk_buff *skb)
{ {
struct amt_dev *amt; struct amt_dev *amt;
@ -2804,7 +2925,7 @@ static int amt_err_lookup(struct sock *sk, struct sk_buff *skb)
break; break;
case AMT_MSG_REQUEST: case AMT_MSG_REQUEST:
case AMT_MSG_MEMBERSHIP_UPDATE: case AMT_MSG_MEMBERSHIP_UPDATE:
if (amt->status >= AMT_STATUS_RECEIVED_ADVERTISEMENT) if (READ_ONCE(amt->status) >= AMT_STATUS_RECEIVED_ADVERTISEMENT)
mod_delayed_work(amt_wq, &amt->req_wq, 0); mod_delayed_work(amt_wq, &amt->req_wq, 0);
break; break;
default: default:
@ -2867,6 +2988,8 @@ static int amt_dev_open(struct net_device *dev)
amt->ready4 = false; amt->ready4 = false;
amt->ready6 = false; amt->ready6 = false;
amt->event_idx = 0;
amt->nr_events = 0;
err = amt_socket_create(amt); err = amt_socket_create(amt);
if (err) if (err)
@ -2874,6 +2997,7 @@ static int amt_dev_open(struct net_device *dev)
amt->req_cnt = 0; amt->req_cnt = 0;
amt->remote_ip = 0; amt->remote_ip = 0;
amt->nonce = 0;
get_random_bytes(&amt->key, sizeof(siphash_key_t)); get_random_bytes(&amt->key, sizeof(siphash_key_t));
amt->status = AMT_STATUS_INIT; amt->status = AMT_STATUS_INIT;
@ -2892,6 +3016,8 @@ static int amt_dev_stop(struct net_device *dev)
struct amt_dev *amt = netdev_priv(dev); struct amt_dev *amt = netdev_priv(dev);
struct amt_tunnel_list *tunnel, *tmp; struct amt_tunnel_list *tunnel, *tmp;
struct socket *sock; struct socket *sock;
struct sk_buff *skb;
int i;
cancel_delayed_work_sync(&amt->req_wq); cancel_delayed_work_sync(&amt->req_wq);
cancel_delayed_work_sync(&amt->discovery_wq); cancel_delayed_work_sync(&amt->discovery_wq);
@ -2904,6 +3030,15 @@ static int amt_dev_stop(struct net_device *dev)
if (sock) if (sock)
udp_tunnel_sock_release(sock); udp_tunnel_sock_release(sock);
cancel_work_sync(&amt->event_wq);
for (i = 0; i < AMT_MAX_EVENTS; i++) {
skb = amt->events[i].skb;
if (skb)
kfree_skb(skb);
amt->events[i].event = AMT_EVENT_NONE;
amt->events[i].skb = NULL;
}
amt->ready4 = false; amt->ready4 = false;
amt->ready6 = false; amt->ready6 = false;
amt->req_cnt = 0; amt->req_cnt = 0;
@ -3095,7 +3230,7 @@ static int amt_newlink(struct net *net, struct net_device *dev,
goto err; goto err;
} }
if (amt->mode == AMT_MODE_RELAY) { if (amt->mode == AMT_MODE_RELAY) {
amt->qrv = amt->net->ipv4.sysctl_igmp_qrv; amt->qrv = READ_ONCE(amt->net->ipv4.sysctl_igmp_qrv);
amt->qri = 10; amt->qri = 10;
dev->needed_headroom = amt->stream_dev->needed_headroom + dev->needed_headroom = amt->stream_dev->needed_headroom +
AMT_RELAY_HLEN; AMT_RELAY_HLEN;
@ -3146,8 +3281,8 @@ static int amt_newlink(struct net *net, struct net_device *dev,
INIT_DELAYED_WORK(&amt->discovery_wq, amt_discovery_work); INIT_DELAYED_WORK(&amt->discovery_wq, amt_discovery_work);
INIT_DELAYED_WORK(&amt->req_wq, amt_req_work); INIT_DELAYED_WORK(&amt->req_wq, amt_req_work);
INIT_DELAYED_WORK(&amt->secret_wq, amt_secret_work); INIT_DELAYED_WORK(&amt->secret_wq, amt_secret_work);
INIT_WORK(&amt->event_wq, amt_event_work);
INIT_LIST_HEAD(&amt->tunnel_list); INIT_LIST_HEAD(&amt->tunnel_list);
return 0; return 0;
err: err:
dev_put(amt->stream_dev); dev_put(amt->stream_dev);
@ -3280,7 +3415,7 @@ static int __init amt_init(void)
if (err < 0) if (err < 0)
goto unregister_notifier; goto unregister_notifier;
amt_wq = alloc_workqueue("amt", WQ_UNBOUND, 1); amt_wq = alloc_workqueue("amt", WQ_UNBOUND, 0);
if (!amt_wq) { if (!amt_wq) {
err = -ENOMEM; err = -ENOMEM;
goto rtnl_unregister; goto rtnl_unregister;

View File

@ -1843,6 +1843,7 @@ static int rcar_canfd_probe(struct platform_device *pdev)
of_child = of_get_child_by_name(pdev->dev.of_node, name); of_child = of_get_child_by_name(pdev->dev.of_node, name);
if (of_child && of_device_is_available(of_child)) if (of_child && of_device_is_available(of_child))
channels_mask |= BIT(i); channels_mask |= BIT(i);
of_node_put(of_child);
} }
if (chip_id != RENESAS_RZG2L) { if (chip_id != RENESAS_RZG2L) {
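The one-line of_node_put() above releases the reference taken by of_get_child_by_name() on every iteration of the probe loop; the microchip/ksz hunk further down applies the same rule to the "ethernet-ports"/"ports" node on all exit paths. A short sketch of the get/put discipline, using a made-up binding:

#include <linux/errno.h>
#include <linux/of.h>

/* Sketch: every of_get_child_by_name() must be balanced by
 * of_node_put(), including on early returns; breaking out of
 * for_each_available_child_of_node() also leaves a child reference
 * to drop. "ports" is only an example node name.
 */
static int demo_count_ports(struct device_node *np)
{
	struct device_node *ports, *port;
	int count = 0;

	ports = of_get_child_by_name(np, "ports");
	if (!ports)
		return -ENODEV;

	for_each_available_child_of_node(ports, port) {
		u32 reg;

		if (of_property_read_u32(port, "reg", &reg)) {
			of_node_put(port);	/* ref held by the iterator */
			of_node_put(ports);
			return -EINVAL;
		}
		count++;
	}

	of_node_put(ports);	/* balance of_get_child_by_name() */
	return count;
}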

View File

@ -1690,8 +1690,8 @@ static int mcp251xfd_register_chip_detect(struct mcp251xfd_priv *priv)
u32 osc; u32 osc;
int err; int err;
/* The OSC_LPMEN is only supported on MCP2518FD, so use it to /* The OSC_LPMEN is only supported on MCP2518FD and MCP251863,
* autodetect the model. * so use it to autodetect the model.
*/ */
err = regmap_update_bits(priv->map_reg, MCP251XFD_REG_OSC, err = regmap_update_bits(priv->map_reg, MCP251XFD_REG_OSC,
MCP251XFD_REG_OSC_LPMEN, MCP251XFD_REG_OSC_LPMEN,
@ -1703,10 +1703,18 @@ static int mcp251xfd_register_chip_detect(struct mcp251xfd_priv *priv)
if (err) if (err)
return err; return err;
if (osc & MCP251XFD_REG_OSC_LPMEN) if (osc & MCP251XFD_REG_OSC_LPMEN) {
devtype_data = &mcp251xfd_devtype_data_mcp2518fd; /* We cannot distinguish between MCP2518FD and
* MCP251863. If firmware specifies MCP251863, keep
* it, otherwise set to MCP2518FD.
*/
if (mcp251xfd_is_251863(priv))
devtype_data = &mcp251xfd_devtype_data_mcp251863;
else else
devtype_data = &mcp251xfd_devtype_data_mcp2518fd;
} else {
devtype_data = &mcp251xfd_devtype_data_mcp2517fd; devtype_data = &mcp251xfd_devtype_data_mcp2517fd;
}
if (!mcp251xfd_is_251XFD(priv) && if (!mcp251xfd_is_251XFD(priv) &&
priv->devtype_data.model != devtype_data->model) { priv->devtype_data.model != devtype_data->model) {

View File

@ -1038,18 +1038,21 @@ int ksz_switch_register(struct ksz_device *dev,
ports = of_get_child_by_name(dev->dev->of_node, "ethernet-ports"); ports = of_get_child_by_name(dev->dev->of_node, "ethernet-ports");
if (!ports) if (!ports)
ports = of_get_child_by_name(dev->dev->of_node, "ports"); ports = of_get_child_by_name(dev->dev->of_node, "ports");
if (ports) if (ports) {
for_each_available_child_of_node(ports, port) { for_each_available_child_of_node(ports, port) {
if (of_property_read_u32(port, "reg", if (of_property_read_u32(port, "reg",
&port_num)) &port_num))
continue; continue;
if (!(dev->port_mask & BIT(port_num))) { if (!(dev->port_mask & BIT(port_num))) {
of_node_put(port); of_node_put(port);
of_node_put(ports);
return -EINVAL; return -EINVAL;
} }
of_get_phy_mode(port, of_get_phy_mode(port,
&dev->ports[port_num].interface); &dev->ports[port_num].interface);
} }
of_node_put(ports);
}
dev->synclko_125 = of_property_read_bool(dev->dev->of_node, dev->synclko_125 = of_property_read_bool(dev->dev->of_node,
"microchip,synclko-125"); "microchip,synclko-125");
dev->synclko_disable = of_property_read_bool(dev->dev->of_node, dev->synclko_disable = of_property_read_bool(dev->dev->of_node,

View File

@ -3382,12 +3382,28 @@ static const struct of_device_id sja1105_dt_ids[] = {
}; };
MODULE_DEVICE_TABLE(of, sja1105_dt_ids); MODULE_DEVICE_TABLE(of, sja1105_dt_ids);
static const struct spi_device_id sja1105_spi_ids[] = {
{ "sja1105e" },
{ "sja1105t" },
{ "sja1105p" },
{ "sja1105q" },
{ "sja1105r" },
{ "sja1105s" },
{ "sja1110a" },
{ "sja1110b" },
{ "sja1110c" },
{ "sja1110d" },
{ },
};
MODULE_DEVICE_TABLE(spi, sja1105_spi_ids);
static struct spi_driver sja1105_driver = { static struct spi_driver sja1105_driver = {
.driver = { .driver = {
.name = "sja1105", .name = "sja1105",
.owner = THIS_MODULE, .owner = THIS_MODULE,
.of_match_table = of_match_ptr(sja1105_dt_ids), .of_match_table = of_match_ptr(sja1105_dt_ids),
}, },
.id_table = sja1105_spi_ids,
.probe = sja1105_probe, .probe = sja1105_probe,
.remove = sja1105_remove, .remove = sja1105_remove,
.shutdown = sja1105_shutdown, .shutdown = sja1105_shutdown,

View File

@ -205,10 +205,20 @@ static const struct of_device_id vsc73xx_of_match[] = {
}; };
MODULE_DEVICE_TABLE(of, vsc73xx_of_match); MODULE_DEVICE_TABLE(of, vsc73xx_of_match);
static const struct spi_device_id vsc73xx_spi_ids[] = {
{ "vsc7385" },
{ "vsc7388" },
{ "vsc7395" },
{ "vsc7398" },
{ },
};
MODULE_DEVICE_TABLE(spi, vsc73xx_spi_ids);
static struct spi_driver vsc73xx_spi_driver = { static struct spi_driver vsc73xx_spi_driver = {
.probe = vsc73xx_spi_probe, .probe = vsc73xx_spi_probe,
.remove = vsc73xx_spi_remove, .remove = vsc73xx_spi_remove,
.shutdown = vsc73xx_spi_shutdown, .shutdown = vsc73xx_spi_shutdown,
.id_table = vsc73xx_spi_ids,
.driver = { .driver = {
.name = "vsc73xx-spi", .name = "vsc73xx-spi",
.of_match_table = vsc73xx_of_match, .of_match_table = vsc73xx_of_match,

View File

@ -1236,8 +1236,8 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
csk->sndbuf = newsk->sk_sndbuf; csk->sndbuf = newsk->sk_sndbuf;
csk->smac_idx = ((struct port_info *)netdev_priv(ndev))->smt_idx; csk->smac_idx = ((struct port_info *)netdev_priv(ndev))->smt_idx;
RCV_WSCALE(tp) = select_rcv_wscale(tcp_full_space(newsk), RCV_WSCALE(tp) = select_rcv_wscale(tcp_full_space(newsk),
sock_net(newsk)-> READ_ONCE(sock_net(newsk)->
ipv4.sysctl_tcp_window_scaling, ipv4.sysctl_tcp_window_scaling),
tp->window_clamp); tp->window_clamp);
neigh_release(n); neigh_release(n);
inet_inherit_port(&tcp_hashinfo, lsk, newsk); inet_inherit_port(&tcp_hashinfo, lsk, newsk);
@ -1384,7 +1384,7 @@ static void chtls_pass_accept_request(struct sock *sk,
#endif #endif
} }
if (req->tcpopt.wsf <= 14 && if (req->tcpopt.wsf <= 14 &&
sock_net(sk)->ipv4.sysctl_tcp_window_scaling) { READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_window_scaling)) {
inet_rsk(oreq)->wscale_ok = 1; inet_rsk(oreq)->wscale_ok = 1;
inet_rsk(oreq)->snd_wscale = req->tcpopt.wsf; inet_rsk(oreq)->snd_wscale = req->tcpopt.wsf;
} }

View File

@ -2287,7 +2287,7 @@ err:
/* Uses sync mcc */ /* Uses sync mcc */
int be_cmd_read_port_transceiver_data(struct be_adapter *adapter, int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
u8 page_num, u8 *data) u8 page_num, u32 off, u32 len, u8 *data)
{ {
struct be_dma_mem cmd; struct be_dma_mem cmd;
struct be_mcc_wrb *wrb; struct be_mcc_wrb *wrb;
@ -2321,10 +2321,10 @@ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
req->port = cpu_to_le32(adapter->hba_port_num); req->port = cpu_to_le32(adapter->hba_port_num);
req->page_num = cpu_to_le32(page_num); req->page_num = cpu_to_le32(page_num);
status = be_mcc_notify_wait(adapter); status = be_mcc_notify_wait(adapter);
if (!status) { if (!status && len > 0) {
struct be_cmd_resp_port_type *resp = cmd.va; struct be_cmd_resp_port_type *resp = cmd.va;
memcpy(data, resp->page_data, PAGE_DATA_LEN); memcpy(data, resp->page_data + off, len);
} }
err: err:
mutex_unlock(&adapter->mcc_lock); mutex_unlock(&adapter->mcc_lock);
@ -2415,7 +2415,7 @@ int be_cmd_query_cable_type(struct be_adapter *adapter)
int status; int status;
status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0,
page_data); 0, PAGE_DATA_LEN, page_data);
if (!status) { if (!status) {
switch (adapter->phy.interface_type) { switch (adapter->phy.interface_type) {
case PHY_TYPE_QSFP: case PHY_TYPE_QSFP:
@ -2440,7 +2440,7 @@ int be_cmd_query_sfp_info(struct be_adapter *adapter)
int status; int status;
status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0,
page_data); 0, PAGE_DATA_LEN, page_data);
if (!status) { if (!status) {
strlcpy(adapter->phy.vendor_name, page_data + strlcpy(adapter->phy.vendor_name, page_data +
SFP_VENDOR_NAME_OFFSET, SFP_VENDOR_NAME_LEN - 1); SFP_VENDOR_NAME_OFFSET, SFP_VENDOR_NAME_LEN - 1);

View File

@ -2427,7 +2427,7 @@ int be_cmd_set_beacon_state(struct be_adapter *adapter, u8 port_num, u8 beacon,
int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num, int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num,
u32 *state); u32 *state);
int be_cmd_read_port_transceiver_data(struct be_adapter *adapter, int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
u8 page_num, u8 *data); u8 page_num, u32 off, u32 len, u8 *data);
int be_cmd_query_cable_type(struct be_adapter *adapter); int be_cmd_query_cable_type(struct be_adapter *adapter);
int be_cmd_query_sfp_info(struct be_adapter *adapter); int be_cmd_query_sfp_info(struct be_adapter *adapter);
int lancer_cmd_read_object(struct be_adapter *adapter, struct be_dma_mem *cmd, int lancer_cmd_read_object(struct be_adapter *adapter, struct be_dma_mem *cmd,

View File

@ -1344,7 +1344,7 @@ static int be_get_module_info(struct net_device *netdev,
return -EOPNOTSUPP; return -EOPNOTSUPP;
status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0,
page_data); 0, PAGE_DATA_LEN, page_data);
if (!status) { if (!status) {
if (!page_data[SFP_PLUS_SFF_8472_COMP]) { if (!page_data[SFP_PLUS_SFF_8472_COMP]) {
modinfo->type = ETH_MODULE_SFF_8079; modinfo->type = ETH_MODULE_SFF_8079;
@ -1362,25 +1362,32 @@ static int be_get_module_eeprom(struct net_device *netdev,
{ {
struct be_adapter *adapter = netdev_priv(netdev); struct be_adapter *adapter = netdev_priv(netdev);
int status; int status;
u32 begin, end;
if (!check_privilege(adapter, MAX_PRIVILEGES)) if (!check_privilege(adapter, MAX_PRIVILEGES))
return -EOPNOTSUPP; return -EOPNOTSUPP;
status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, begin = eeprom->offset;
end = eeprom->offset + eeprom->len;
if (begin < PAGE_DATA_LEN) {
status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, begin,
min_t(u32, end, PAGE_DATA_LEN) - begin,
data); data);
if (status) if (status)
goto err; goto err;
if (eeprom->offset + eeprom->len > PAGE_DATA_LEN) { data += PAGE_DATA_LEN - begin;
status = be_cmd_read_port_transceiver_data(adapter, begin = PAGE_DATA_LEN;
TR_PAGE_A2, }
data +
PAGE_DATA_LEN); if (end > PAGE_DATA_LEN) {
status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A2,
begin - PAGE_DATA_LEN,
end - begin, data);
if (status) if (status)
goto err; goto err;
} }
if (eeprom->offset)
memcpy(data, data + eeprom->offset, eeprom->len);
err: err:
return be_cmd_status(status); return be_cmd_status(status);
} }

View File

@ -630,7 +630,6 @@ struct e1000_phy_info {
bool disable_polarity_correction; bool disable_polarity_correction;
bool is_mdix; bool is_mdix;
bool polarity_correction; bool polarity_correction;
bool reset_disable;
bool speed_downgraded; bool speed_downgraded;
bool autoneg_wait_to_complete; bool autoneg_wait_to_complete;
}; };

View File

@ -2050,10 +2050,6 @@ static s32 e1000_check_reset_block_ich8lan(struct e1000_hw *hw)
bool blocked = false; bool blocked = false;
int i = 0; int i = 0;
/* Check the PHY (LCD) reset flag */
if (hw->phy.reset_disable)
return true;
while ((blocked = !(er32(FWSM) & E1000_ICH_FWSM_RSPCIPHY)) && while ((blocked = !(er32(FWSM) & E1000_ICH_FWSM_RSPCIPHY)) &&
(i++ < 30)) (i++ < 30))
usleep_range(10000, 11000); usleep_range(10000, 11000);

View File

@ -271,7 +271,6 @@
#define I217_CGFREG_ENABLE_MTA_RESET 0x0002 #define I217_CGFREG_ENABLE_MTA_RESET 0x0002
#define I217_MEMPWR PHY_REG(772, 26) #define I217_MEMPWR PHY_REG(772, 26)
#define I217_MEMPWR_DISABLE_SMB_RELEASE 0x0010 #define I217_MEMPWR_DISABLE_SMB_RELEASE 0x0010
#define I217_MEMPWR_MOEM 0x1000
/* Receive Address Initial CRC Calculation */ /* Receive Address Initial CRC Calculation */
#define E1000_PCH_RAICC(_n) (0x05F50 + ((_n) * 4)) #define E1000_PCH_RAICC(_n) (0x05F50 + ((_n) * 4))

View File

@ -6494,6 +6494,10 @@ static void e1000e_s0ix_exit_flow(struct e1000_adapter *adapter)
if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID && if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID &&
hw->mac.type >= e1000_pch_adp) { hw->mac.type >= e1000_pch_adp) {
/* Keep the GPT clock enabled for CSME */
mac_data = er32(FEXTNVM);
mac_data |= BIT(3);
ew32(FEXTNVM, mac_data);
/* Request ME unconfigure the device from S0ix */ /* Request ME unconfigure the device from S0ix */
mac_data = er32(H2ME); mac_data = er32(H2ME);
mac_data &= ~E1000_H2ME_START_DPG; mac_data &= ~E1000_H2ME_START_DPG;
@ -6987,21 +6991,8 @@ static __maybe_unused int e1000e_pm_suspend(struct device *dev)
struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev)); struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev));
struct e1000_adapter *adapter = netdev_priv(netdev); struct e1000_adapter *adapter = netdev_priv(netdev);
struct pci_dev *pdev = to_pci_dev(dev); struct pci_dev *pdev = to_pci_dev(dev);
struct e1000_hw *hw = &adapter->hw;
u16 phy_data;
int rc; int rc;
if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID &&
hw->mac.type >= e1000_pch_adp) {
/* Mask OEM Bits / Gig Disable / Restart AN (772_26[12] = 1) */
e1e_rphy(hw, I217_MEMPWR, &phy_data);
phy_data |= I217_MEMPWR_MOEM;
e1e_wphy(hw, I217_MEMPWR, phy_data);
/* Disable LCD reset */
hw->phy.reset_disable = true;
}
e1000e_flush_lpic(pdev); e1000e_flush_lpic(pdev);
e1000e_pm_freeze(dev); e1000e_pm_freeze(dev);
@ -7023,8 +7014,6 @@ static __maybe_unused int e1000e_pm_resume(struct device *dev)
struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev)); struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev));
struct e1000_adapter *adapter = netdev_priv(netdev); struct e1000_adapter *adapter = netdev_priv(netdev);
struct pci_dev *pdev = to_pci_dev(dev); struct pci_dev *pdev = to_pci_dev(dev);
struct e1000_hw *hw = &adapter->hw;
u16 phy_data;
int rc; int rc;
/* Introduce S0ix implementation */ /* Introduce S0ix implementation */
@ -7035,17 +7024,6 @@ static __maybe_unused int e1000e_pm_resume(struct device *dev)
if (rc) if (rc)
return rc; return rc;
if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID &&
hw->mac.type >= e1000_pch_adp) {
/* Unmask OEM Bits / Gig Disable / Restart AN 772_26[12] = 0 */
e1e_rphy(hw, I217_MEMPWR, &phy_data);
phy_data &= ~I217_MEMPWR_MOEM;
e1e_wphy(hw, I217_MEMPWR, phy_data);
/* Enable LCD reset */
hw->phy.reset_disable = false;
}
return e1000e_pm_thaw(dev); return e1000e_pm_thaw(dev);
} }

View File

@ -10650,7 +10650,7 @@ static int i40e_reset(struct i40e_pf *pf)
**/ **/
static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
{ {
int old_recovery_mode_bit = test_bit(__I40E_RECOVERY_MODE, pf->state); const bool is_recovery_mode_reported = i40e_check_recovery_mode(pf);
struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi]; struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi];
struct i40e_hw *hw = &pf->hw; struct i40e_hw *hw = &pf->hw;
i40e_status ret; i40e_status ret;
@ -10658,13 +10658,11 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
int v; int v;
if (test_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state) && if (test_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state) &&
i40e_check_recovery_mode(pf)) { is_recovery_mode_reported)
i40e_set_ethtool_ops(pf->vsi[pf->lan_vsi]->netdev); i40e_set_ethtool_ops(pf->vsi[pf->lan_vsi]->netdev);
}
if (test_bit(__I40E_DOWN, pf->state) && if (test_bit(__I40E_DOWN, pf->state) &&
!test_bit(__I40E_RECOVERY_MODE, pf->state) && !test_bit(__I40E_RECOVERY_MODE, pf->state))
!old_recovery_mode_bit)
goto clear_recovery; goto clear_recovery;
dev_dbg(&pf->pdev->dev, "Rebuilding internal switch\n"); dev_dbg(&pf->pdev->dev, "Rebuilding internal switch\n");
@ -10691,13 +10689,12 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
* accordingly with regard to resources initialization * accordingly with regard to resources initialization
* and deinitialization * and deinitialization
*/ */
if (test_bit(__I40E_RECOVERY_MODE, pf->state) || if (test_bit(__I40E_RECOVERY_MODE, pf->state)) {
old_recovery_mode_bit) {
if (i40e_get_capabilities(pf, if (i40e_get_capabilities(pf,
i40e_aqc_opc_list_func_capabilities)) i40e_aqc_opc_list_func_capabilities))
goto end_unlock; goto end_unlock;
if (test_bit(__I40E_RECOVERY_MODE, pf->state)) { if (is_recovery_mode_reported) {
/* we're staying in recovery mode so we'll reinitialize /* we're staying in recovery mode so we'll reinitialize
* misc vector here * misc vector here
*/ */

View File

@ -64,7 +64,6 @@ struct iavf_vsi {
u16 id; u16 id;
DECLARE_BITMAP(state, __IAVF_VSI_STATE_SIZE__); DECLARE_BITMAP(state, __IAVF_VSI_STATE_SIZE__);
int base_vector; int base_vector;
u16 work_limit;
u16 qs_handle; u16 qs_handle;
void *priv; /* client driver data reference. */ void *priv; /* client driver data reference. */
}; };
@ -159,8 +158,12 @@ struct iavf_vlan {
struct iavf_vlan_filter { struct iavf_vlan_filter {
struct list_head list; struct list_head list;
struct iavf_vlan vlan; struct iavf_vlan vlan;
bool remove; /* filter needs to be removed */ struct {
bool add; /* filter needs to be added */ u8 is_new_vlan:1; /* filter is new, wait for PF answer */
u8 remove:1; /* filter needs to be removed */
u8 add:1; /* filter needs to be added */
u8 padding:5;
};
}; };
#define IAVF_MAX_TRAFFIC_CLASS 4 #define IAVF_MAX_TRAFFIC_CLASS 4
@ -461,6 +464,10 @@ static inline const char *iavf_state_str(enum iavf_state_t state)
return "__IAVF_INIT_VERSION_CHECK"; return "__IAVF_INIT_VERSION_CHECK";
case __IAVF_INIT_GET_RESOURCES: case __IAVF_INIT_GET_RESOURCES:
return "__IAVF_INIT_GET_RESOURCES"; return "__IAVF_INIT_GET_RESOURCES";
case __IAVF_INIT_EXTENDED_CAPS:
return "__IAVF_INIT_EXTENDED_CAPS";
case __IAVF_INIT_CONFIG_ADAPTER:
return "__IAVF_INIT_CONFIG_ADAPTER";
case __IAVF_INIT_SW: case __IAVF_INIT_SW:
return "__IAVF_INIT_SW"; return "__IAVF_INIT_SW";
case __IAVF_INIT_FAILED: case __IAVF_INIT_FAILED:
@ -520,6 +527,7 @@ int iavf_get_vf_config(struct iavf_adapter *adapter);
int iavf_get_vf_vlan_v2_caps(struct iavf_adapter *adapter); int iavf_get_vf_vlan_v2_caps(struct iavf_adapter *adapter);
int iavf_send_vf_offload_vlan_v2_msg(struct iavf_adapter *adapter); int iavf_send_vf_offload_vlan_v2_msg(struct iavf_adapter *adapter);
void iavf_set_queue_vlan_tag_loc(struct iavf_adapter *adapter); void iavf_set_queue_vlan_tag_loc(struct iavf_adapter *adapter);
u16 iavf_get_num_vlans_added(struct iavf_adapter *adapter);
void iavf_irq_enable(struct iavf_adapter *adapter, bool flush); void iavf_irq_enable(struct iavf_adapter *adapter, bool flush);
void iavf_configure_queues(struct iavf_adapter *adapter); void iavf_configure_queues(struct iavf_adapter *adapter);
void iavf_deconfigure_queues(struct iavf_adapter *adapter); void iavf_deconfigure_queues(struct iavf_adapter *adapter);

View File

@ -692,12 +692,8 @@ static int __iavf_get_coalesce(struct net_device *netdev,
struct ethtool_coalesce *ec, int queue) struct ethtool_coalesce *ec, int queue)
{ {
struct iavf_adapter *adapter = netdev_priv(netdev); struct iavf_adapter *adapter = netdev_priv(netdev);
struct iavf_vsi *vsi = &adapter->vsi;
struct iavf_ring *rx_ring, *tx_ring; struct iavf_ring *rx_ring, *tx_ring;
ec->tx_max_coalesced_frames = vsi->work_limit;
ec->rx_max_coalesced_frames = vsi->work_limit;
/* Rx and Tx usecs per queue value. If user doesn't specify the /* Rx and Tx usecs per queue value. If user doesn't specify the
* queue, return queue 0's value to represent. * queue, return queue 0's value to represent.
*/ */
@ -825,12 +821,8 @@ static int __iavf_set_coalesce(struct net_device *netdev,
struct ethtool_coalesce *ec, int queue) struct ethtool_coalesce *ec, int queue)
{ {
struct iavf_adapter *adapter = netdev_priv(netdev); struct iavf_adapter *adapter = netdev_priv(netdev);
struct iavf_vsi *vsi = &adapter->vsi;
int i; int i;
if (ec->tx_max_coalesced_frames_irq || ec->rx_max_coalesced_frames_irq)
vsi->work_limit = ec->tx_max_coalesced_frames_irq;
if (ec->rx_coalesce_usecs == 0) { if (ec->rx_coalesce_usecs == 0) {
if (ec->use_adaptive_rx_coalesce) if (ec->use_adaptive_rx_coalesce)
netif_info(adapter, drv, netdev, "rx-usecs=0, need to disable adaptive-rx for a complete disable\n"); netif_info(adapter, drv, netdev, "rx-usecs=0, need to disable adaptive-rx for a complete disable\n");
@ -1969,8 +1961,6 @@ static int iavf_set_rxfh(struct net_device *netdev, const u32 *indir,
static const struct ethtool_ops iavf_ethtool_ops = { static const struct ethtool_ops iavf_ethtool_ops = {
.supported_coalesce_params = ETHTOOL_COALESCE_USECS | .supported_coalesce_params = ETHTOOL_COALESCE_USECS |
ETHTOOL_COALESCE_MAX_FRAMES |
ETHTOOL_COALESCE_MAX_FRAMES_IRQ |
ETHTOOL_COALESCE_USE_ADAPTIVE, ETHTOOL_COALESCE_USE_ADAPTIVE,
.get_drvinfo = iavf_get_drvinfo, .get_drvinfo = iavf_get_drvinfo,
.get_link = ethtool_op_get_link, .get_link = ethtool_op_get_link,

View File

@ -843,7 +843,7 @@ static void iavf_restore_filters(struct iavf_adapter *adapter)
* iavf_get_num_vlans_added - get number of VLANs added * iavf_get_num_vlans_added - get number of VLANs added
* @adapter: board private structure * @adapter: board private structure
*/ */
static u16 iavf_get_num_vlans_added(struct iavf_adapter *adapter) u16 iavf_get_num_vlans_added(struct iavf_adapter *adapter)
{ {
return bitmap_weight(adapter->vsi.active_cvlans, VLAN_N_VID) + return bitmap_weight(adapter->vsi.active_cvlans, VLAN_N_VID) +
bitmap_weight(adapter->vsi.active_svlans, VLAN_N_VID); bitmap_weight(adapter->vsi.active_svlans, VLAN_N_VID);
@ -906,11 +906,6 @@ static int iavf_vlan_rx_add_vid(struct net_device *netdev,
if (!iavf_add_vlan(adapter, IAVF_VLAN(vid, be16_to_cpu(proto)))) if (!iavf_add_vlan(adapter, IAVF_VLAN(vid, be16_to_cpu(proto))))
return -ENOMEM; return -ENOMEM;
if (proto == cpu_to_be16(ETH_P_8021Q))
set_bit(vid, adapter->vsi.active_cvlans);
else
set_bit(vid, adapter->vsi.active_svlans);
return 0; return 0;
} }
@ -2245,7 +2240,6 @@ int iavf_parse_vf_resource_msg(struct iavf_adapter *adapter)
adapter->vsi.back = adapter; adapter->vsi.back = adapter;
adapter->vsi.base_vector = 1; adapter->vsi.base_vector = 1;
adapter->vsi.work_limit = IAVF_DEFAULT_IRQ_WORK;
vsi->netdev = adapter->netdev; vsi->netdev = adapter->netdev;
vsi->qs_handle = adapter->vsi_res->qset_handle; vsi->qs_handle = adapter->vsi_res->qset_handle;
if (adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) { if (adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
@ -2956,6 +2950,9 @@ continue_reset:
adapter->aq_required |= IAVF_FLAG_AQ_ADD_CLOUD_FILTER; adapter->aq_required |= IAVF_FLAG_AQ_ADD_CLOUD_FILTER;
iavf_misc_irq_enable(adapter); iavf_misc_irq_enable(adapter);
bitmap_clear(adapter->vsi.active_cvlans, 0, VLAN_N_VID);
bitmap_clear(adapter->vsi.active_svlans, 0, VLAN_N_VID);
mod_delayed_work(iavf_wq, &adapter->watchdog_task, 2); mod_delayed_work(iavf_wq, &adapter->watchdog_task, 2);
/* We were running when the reset started, so we need to restore some /* We were running when the reset started, so we need to restore some

View File

@ -194,7 +194,7 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
struct iavf_tx_buffer *tx_buf; struct iavf_tx_buffer *tx_buf;
struct iavf_tx_desc *tx_desc; struct iavf_tx_desc *tx_desc;
unsigned int total_bytes = 0, total_packets = 0; unsigned int total_bytes = 0, total_packets = 0;
unsigned int budget = vsi->work_limit; unsigned int budget = IAVF_DEFAULT_IRQ_WORK;
tx_buf = &tx_ring->tx_bi[i]; tx_buf = &tx_ring->tx_bi[i];
tx_desc = IAVF_TX_DESC(tx_ring, i); tx_desc = IAVF_TX_DESC(tx_ring, i);
@ -1285,11 +1285,10 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring,
{ {
struct iavf_rx_buffer *rx_buffer; struct iavf_rx_buffer *rx_buffer;
if (!size)
return NULL;
rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean]; rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean];
prefetchw(rx_buffer->page); prefetchw(rx_buffer->page);
if (!size)
return rx_buffer;
/* we are reusing so sync this buffer for CPU use */ /* we are reusing so sync this buffer for CPU use */
dma_sync_single_range_for_cpu(rx_ring->dev, dma_sync_single_range_for_cpu(rx_ring->dev,

View File

@ -626,6 +626,33 @@ static void iavf_mac_add_reject(struct iavf_adapter *adapter)
spin_unlock_bh(&adapter->mac_vlan_list_lock); spin_unlock_bh(&adapter->mac_vlan_list_lock);
} }
/**
* iavf_vlan_add_reject
* @adapter: adapter structure
*
* Remove VLAN filters from list based on PF response.
**/
static void iavf_vlan_add_reject(struct iavf_adapter *adapter)
{
struct iavf_vlan_filter *f, *ftmp;
spin_lock_bh(&adapter->mac_vlan_list_lock);
list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) {
if (f->is_new_vlan) {
if (f->vlan.tpid == ETH_P_8021Q)
clear_bit(f->vlan.vid,
adapter->vsi.active_cvlans);
else
clear_bit(f->vlan.vid,
adapter->vsi.active_svlans);
list_del(&f->list);
kfree(f);
}
}
spin_unlock_bh(&adapter->mac_vlan_list_lock);
}
/** /**
* iavf_add_vlans * iavf_add_vlans
* @adapter: adapter structure * @adapter: adapter structure
@ -683,6 +710,7 @@ void iavf_add_vlans(struct iavf_adapter *adapter)
vvfl->vlan_id[i] = f->vlan.vid; vvfl->vlan_id[i] = f->vlan.vid;
i++; i++;
f->add = false; f->add = false;
f->is_new_vlan = true;
if (i == count) if (i == count)
break; break;
} }
@ -695,10 +723,18 @@ void iavf_add_vlans(struct iavf_adapter *adapter)
iavf_send_pf_msg(adapter, VIRTCHNL_OP_ADD_VLAN, (u8 *)vvfl, len); iavf_send_pf_msg(adapter, VIRTCHNL_OP_ADD_VLAN, (u8 *)vvfl, len);
kfree(vvfl); kfree(vvfl);
} else { } else {
u16 max_vlans = adapter->vlan_v2_caps.filtering.max_filters;
u16 current_vlans = iavf_get_num_vlans_added(adapter);
struct virtchnl_vlan_filter_list_v2 *vvfl_v2; struct virtchnl_vlan_filter_list_v2 *vvfl_v2;
adapter->current_op = VIRTCHNL_OP_ADD_VLAN_V2; adapter->current_op = VIRTCHNL_OP_ADD_VLAN_V2;
if ((count + current_vlans) > max_vlans &&
current_vlans < max_vlans) {
count = max_vlans - iavf_get_num_vlans_added(adapter);
more = true;
}
len = sizeof(*vvfl_v2) + ((count - 1) * len = sizeof(*vvfl_v2) + ((count - 1) *
sizeof(struct virtchnl_vlan_filter)); sizeof(struct virtchnl_vlan_filter));
if (len > IAVF_MAX_AQ_BUF_SIZE) { if (len > IAVF_MAX_AQ_BUF_SIZE) {
@ -725,6 +761,9 @@ void iavf_add_vlans(struct iavf_adapter *adapter)
&adapter->vlan_v2_caps.filtering.filtering_support; &adapter->vlan_v2_caps.filtering.filtering_support;
struct virtchnl_vlan *vlan; struct virtchnl_vlan *vlan;
if (i == count)
break;
/* give priority over outer if it's enabled */ /* give priority over outer if it's enabled */
if (filtering_support->outer) if (filtering_support->outer)
vlan = &vvfl_v2->filters[i].outer; vlan = &vvfl_v2->filters[i].outer;
@ -736,8 +775,7 @@ void iavf_add_vlans(struct iavf_adapter *adapter)
i++; i++;
f->add = false; f->add = false;
if (i == count) f->is_new_vlan = true;
break;
} }
} }
@ -2080,6 +2118,11 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
*/ */
iavf_netdev_features_vlan_strip_set(netdev, true); iavf_netdev_features_vlan_strip_set(netdev, true);
break; break;
case VIRTCHNL_OP_ADD_VLAN_V2:
iavf_vlan_add_reject(adapter);
dev_warn(&adapter->pdev->dev, "Failed to add VLAN filter, error %s\n",
iavf_stat_str(&adapter->hw, v_retval));
break;
default: default:
dev_err(&adapter->pdev->dev, "PF returned error %d (%s) to our request %d\n", dev_err(&adapter->pdev->dev, "PF returned error %d (%s) to our request %d\n",
v_retval, iavf_stat_str(&adapter->hw, v_retval), v_retval, iavf_stat_str(&adapter->hw, v_retval),
@ -2332,6 +2375,24 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
spin_unlock_bh(&adapter->adv_rss_lock); spin_unlock_bh(&adapter->adv_rss_lock);
} }
break; break;
case VIRTCHNL_OP_ADD_VLAN_V2: {
struct iavf_vlan_filter *f;
spin_lock_bh(&adapter->mac_vlan_list_lock);
list_for_each_entry(f, &adapter->vlan_filter_list, list) {
if (f->is_new_vlan) {
f->is_new_vlan = false;
if (f->vlan.tpid == ETH_P_8021Q)
set_bit(f->vlan.vid,
adapter->vsi.active_cvlans);
else
set_bit(f->vlan.vid,
adapter->vsi.active_svlans);
}
}
spin_unlock_bh(&adapter->mac_vlan_list_lock);
}
break;
case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING: case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
/* PF enabled vlan strip on this VF. /* PF enabled vlan strip on this VF.
* Update netdev->features if needed to be in sync with ethtool. * Update netdev->features if needed to be in sync with ethtool.
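The iavf hunks track every VLAN filter sent to the PF with an is_new_vlan bit: iavf_add_vlans() marks the batch, and the VIRTCHNL_OP_ADD_VLAN_V2 completion either commits the VIDs into the active_cvlans/active_svlans bitmaps or, on failure, iavf_vlan_add_reject() drops the pending entries again. A compact sketch of that pending/commit/reject handling over a locked list, with made-up types:

#include <linux/bitops.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct demo_filter {
	struct list_head list;
	u16 vid;
	u8 add:1;	/* needs to be sent */
	u8 is_new:1;	/* sent, waiting for the PF's answer */
};

struct demo_adapter {
	spinlock_t lock;
	struct list_head filters;
	unsigned long *active_vids;	/* bitmap of accepted VIDs */
};

/* PF accepted the batch: clear the pending bit, record the VID. */
static void demo_commit_pending(struct demo_adapter *ad)
{
	struct demo_filter *f;

	spin_lock_bh(&ad->lock);
	list_for_each_entry(f, &ad->filters, list) {
		if (f->is_new) {
			f->is_new = 0;
			set_bit(f->vid, ad->active_vids);
		}
	}
	spin_unlock_bh(&ad->lock);
}

/* PF rejected the batch: forget the pending filters entirely. */
static void demo_reject_pending(struct demo_adapter *ad)
{
	struct demo_filter *f, *tmp;

	spin_lock_bh(&ad->lock);
	list_for_each_entry_safe(f, tmp, &ad->filters, list) {
		if (f->is_new) {
			clear_bit(f->vid, ad->active_vids);
			list_del(&f->list);
			kfree(f);
		}
	}
	spin_unlock_bh(&ad->lock);
}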

View File

@ -6171,6 +6171,9 @@ u32 igc_rd32(struct igc_hw *hw, u32 reg)
u8 __iomem *hw_addr = READ_ONCE(hw->hw_addr); u8 __iomem *hw_addr = READ_ONCE(hw->hw_addr);
u32 value = 0; u32 value = 0;
if (IGC_REMOVED(hw_addr))
return ~value;
value = readl(&hw_addr[reg]); value = readl(&hw_addr[reg]);
/* reads should not return all F's */ /* reads should not return all F's */

View File

@ -306,6 +306,7 @@ u32 igc_rd32(struct igc_hw *hw, u32 reg);
#define wr32(reg, val) \ #define wr32(reg, val) \
do { \ do { \
u8 __iomem *hw_addr = READ_ONCE((hw)->hw_addr); \ u8 __iomem *hw_addr = READ_ONCE((hw)->hw_addr); \
if (!IGC_REMOVED(hw_addr)) \
writel((val), &hw_addr[(reg)]); \ writel((val), &hw_addr[(reg)]); \
} while (0) } while (0)
@ -318,4 +319,6 @@ do { \
#define array_rd32(reg, offset) (igc_rd32(hw, (reg) + ((offset) << 2))) #define array_rd32(reg, offset) (igc_rd32(hw, (reg) + ((offset) << 2)))
#define IGC_REMOVED(h) unlikely(!(h))
#endif #endif
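The igc hunks guard both igc_rd32() and the wr32() macro with IGC_REMOVED(), i.e. a NULL hw_addr test, so a surprise-removed device is neither read from nor written to; the read path returns all ones, which is what a dead PCIe read would produce anyway. A small sketch of that guard, with invented names:

#include <linux/compiler.h>
#include <linux/io.h>
#include <linux/types.h>

struct demo_hw {
	u8 __iomem *hw_addr;	/* set to NULL once the device is gone */
};

#define DEMO_REMOVED(addr)	unlikely(!(addr))

static u32 demo_rd32(struct demo_hw *hw, u32 reg)
{
	u8 __iomem *hw_addr = READ_ONCE(hw->hw_addr);

	if (DEMO_REMOVED(hw_addr))
		return ~0U;		/* mimic a dead PCIe read */
	return readl(&hw_addr[reg]);
}

static void demo_wr32(struct demo_hw *hw, u32 reg, u32 val)
{
	u8 __iomem *hw_addr = READ_ONCE(hw->hw_addr);

	if (!DEMO_REMOVED(hw_addr))
		writel(val, &hw_addr[reg]);
}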

View File

@ -779,6 +779,7 @@ struct ixgbe_adapter {
#ifdef CONFIG_IXGBE_IPSEC #ifdef CONFIG_IXGBE_IPSEC
struct ixgbe_ipsec *ipsec; struct ixgbe_ipsec *ipsec;
#endif /* CONFIG_IXGBE_IPSEC */ #endif /* CONFIG_IXGBE_IPSEC */
spinlock_t vfs_lock;
}; };
static inline int ixgbe_determine_xdp_q_idx(int cpu) static inline int ixgbe_determine_xdp_q_idx(int cpu)

View File

@ -6403,6 +6403,9 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter,
/* n-tuple support exists, always init our spinlock */ /* n-tuple support exists, always init our spinlock */
spin_lock_init(&adapter->fdir_perfect_lock); spin_lock_init(&adapter->fdir_perfect_lock);
/* init spinlock to avoid concurrency of VF resources */
spin_lock_init(&adapter->vfs_lock);
#ifdef CONFIG_IXGBE_DCB #ifdef CONFIG_IXGBE_DCB
ixgbe_init_dcb(adapter); ixgbe_init_dcb(adapter);
#endif #endif

View File

@ -205,10 +205,13 @@ void ixgbe_enable_sriov(struct ixgbe_adapter *adapter, unsigned int max_vfs)
int ixgbe_disable_sriov(struct ixgbe_adapter *adapter) int ixgbe_disable_sriov(struct ixgbe_adapter *adapter)
{ {
unsigned int num_vfs = adapter->num_vfs, vf; unsigned int num_vfs = adapter->num_vfs, vf;
unsigned long flags;
int rss; int rss;
spin_lock_irqsave(&adapter->vfs_lock, flags);
/* set num VFs to 0 to prevent access to vfinfo */ /* set num VFs to 0 to prevent access to vfinfo */
adapter->num_vfs = 0; adapter->num_vfs = 0;
spin_unlock_irqrestore(&adapter->vfs_lock, flags);
/* put the reference to all of the vf devices */ /* put the reference to all of the vf devices */
for (vf = 0; vf < num_vfs; ++vf) { for (vf = 0; vf < num_vfs; ++vf) {
@ -1355,8 +1358,10 @@ static void ixgbe_rcv_ack_from_vf(struct ixgbe_adapter *adapter, u32 vf)
void ixgbe_msg_task(struct ixgbe_adapter *adapter) void ixgbe_msg_task(struct ixgbe_adapter *adapter)
{ {
struct ixgbe_hw *hw = &adapter->hw; struct ixgbe_hw *hw = &adapter->hw;
unsigned long flags;
u32 vf; u32 vf;
spin_lock_irqsave(&adapter->vfs_lock, flags);
for (vf = 0; vf < adapter->num_vfs; vf++) { for (vf = 0; vf < adapter->num_vfs; vf++) {
/* process any reset requests */ /* process any reset requests */
if (!ixgbe_check_for_rst(hw, vf)) if (!ixgbe_check_for_rst(hw, vf))
@ -1370,6 +1375,7 @@ void ixgbe_msg_task(struct ixgbe_adapter *adapter)
if (!ixgbe_check_for_ack(hw, vf)) if (!ixgbe_check_for_ack(hw, vf))
ixgbe_rcv_ack_from_vf(adapter, vf); ixgbe_rcv_ack_from_vf(adapter, vf);
} }
spin_unlock_irqrestore(&adapter->vfs_lock, flags);
} }
static inline void ixgbe_ping_vf(struct ixgbe_adapter *adapter, int vf) static inline void ixgbe_ping_vf(struct ixgbe_adapter *adapter, int vf)

View File

@ -167,12 +167,12 @@ static int prestera_flower_parse_meta(struct prestera_acl_rule *rule,
} }
port = netdev_priv(ingress_dev); port = netdev_priv(ingress_dev);
mask = htons(0x1FFF); mask = htons(0x1FFF << 3);
key = htons(port->hw_id); key = htons(port->hw_id << 3);
rule_match_set(r_match->key, SYS_PORT, key); rule_match_set(r_match->key, SYS_PORT, key);
rule_match_set(r_match->mask, SYS_PORT, mask); rule_match_set(r_match->mask, SYS_PORT, mask);
mask = htons(0x1FF); mask = htons(0x3FF);
key = htons(port->dev_id); key = htons(port->dev_id);
rule_match_set(r_match->key, SYS_DEV, key); rule_match_set(r_match->key, SYS_DEV, key);
rule_match_set(r_match->mask, SYS_DEV, mask); rule_match_set(r_match->mask, SYS_DEV, mask);

View File

@ -93,6 +93,9 @@ mtk_flow_get_wdma_info(struct net_device *dev, const u8 *addr, struct mtk_wdma_i
}; };
struct net_device_path path = {}; struct net_device_path path = {};
if (!ctx.dev)
return -ENODEV;
memcpy(ctx.daddr, addr, sizeof(ctx.daddr)); memcpy(ctx.daddr, addr, sizeof(ctx.daddr));
if (!IS_ENABLED(CONFIG_NET_MEDIATEK_SOC_WED)) if (!IS_ENABLED(CONFIG_NET_MEDIATEK_SOC_WED))

View File

@ -651,7 +651,7 @@ mtk_wed_tx_ring_setup(struct mtk_wed_device *dev, int idx, void __iomem *regs)
* WDMA RX. * WDMA RX.
*/ */
BUG_ON(idx > ARRAY_SIZE(dev->tx_ring)); BUG_ON(idx >= ARRAY_SIZE(dev->tx_ring));
if (mtk_wed_ring_alloc(dev, ring, MTK_WED_TX_RING_SIZE)) if (mtk_wed_ring_alloc(dev, ring, MTK_WED_TX_RING_SIZE))
return -ENOMEM; return -ENOMEM;

View File

@ -5384,7 +5384,7 @@ static bool mlxsw_sp_fi_is_gateway(const struct mlxsw_sp *mlxsw_sp,
{ {
const struct fib_nh *nh = fib_info_nh(fi, 0); const struct fib_nh *nh = fib_info_nh(fi, 0);
return nh->fib_nh_scope == RT_SCOPE_LINK || return nh->fib_nh_gw_family ||
mlxsw_sp_nexthop4_ipip_type(mlxsw_sp, nh, NULL); mlxsw_sp_nexthop4_ipip_type(mlxsw_sp, nh, NULL);
} }
@ -10324,7 +10324,7 @@ static void mlxsw_sp_mp4_hash_init(struct mlxsw_sp *mlxsw_sp,
unsigned long *fields = config->fields; unsigned long *fields = config->fields;
u32 hash_fields; u32 hash_fields;
switch (net->ipv4.sysctl_fib_multipath_hash_policy) { switch (READ_ONCE(net->ipv4.sysctl_fib_multipath_hash_policy)) {
case 0: case 0:
mlxsw_sp_mp4_hash_outer_addr(config); mlxsw_sp_mp4_hash_outer_addr(config);
break; break;
@ -10342,7 +10342,7 @@ static void mlxsw_sp_mp4_hash_init(struct mlxsw_sp *mlxsw_sp,
mlxsw_sp_mp_hash_inner_l3(config); mlxsw_sp_mp_hash_inner_l3(config);
break; break;
case 3: case 3:
hash_fields = net->ipv4.sysctl_fib_multipath_hash_fields; hash_fields = READ_ONCE(net->ipv4.sysctl_fib_multipath_hash_fields);
/* Outer */ /* Outer */
MLXSW_SP_MP_HASH_HEADER_SET(headers, IPV4_EN_NOT_TCP_NOT_UDP); MLXSW_SP_MP_HASH_HEADER_SET(headers, IPV4_EN_NOT_TCP_NOT_UDP);
MLXSW_SP_MP_HASH_HEADER_SET(headers, IPV4_EN_TCP_UDP); MLXSW_SP_MP_HASH_HEADER_SET(headers, IPV4_EN_TCP_UDP);
@ -10523,13 +10523,14 @@ static int mlxsw_sp_dscp_init(struct mlxsw_sp *mlxsw_sp)
static int __mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp) static int __mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp)
{ {
struct net *net = mlxsw_sp_net(mlxsw_sp); struct net *net = mlxsw_sp_net(mlxsw_sp);
bool usp = net->ipv4.sysctl_ip_fwd_update_priority;
char rgcr_pl[MLXSW_REG_RGCR_LEN]; char rgcr_pl[MLXSW_REG_RGCR_LEN];
u64 max_rifs; u64 max_rifs;
bool usp;
if (!MLXSW_CORE_RES_VALID(mlxsw_sp->core, MAX_RIFS)) if (!MLXSW_CORE_RES_VALID(mlxsw_sp->core, MAX_RIFS))
return -EIO; return -EIO;
max_rifs = MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_RIFS); max_rifs = MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_RIFS);
usp = READ_ONCE(net->ipv4.sysctl_ip_fwd_update_priority);
mlxsw_reg_rgcr_pack(rgcr_pl, true, true); mlxsw_reg_rgcr_pack(rgcr_pl, true, true);
mlxsw_reg_rgcr_max_router_interfaces_set(rgcr_pl, max_rifs); mlxsw_reg_rgcr_max_router_interfaces_set(rgcr_pl, max_rifs);

View File

@ -75,6 +75,9 @@ static int __lan966x_mac_learn(struct lan966x *lan966x, int pgid,
unsigned int vid, unsigned int vid,
enum macaccess_entry_type type) enum macaccess_entry_type type)
{ {
int ret;
spin_lock(&lan966x->mac_lock);
lan966x_mac_select(lan966x, mac, vid); lan966x_mac_select(lan966x, mac, vid);
/* Issue a write command */ /* Issue a write command */
@ -86,7 +89,10 @@ static int __lan966x_mac_learn(struct lan966x *lan966x, int pgid,
ANA_MACACCESS_MAC_TABLE_CMD_SET(MACACCESS_CMD_LEARN), ANA_MACACCESS_MAC_TABLE_CMD_SET(MACACCESS_CMD_LEARN),
lan966x, ANA_MACACCESS); lan966x, ANA_MACACCESS);
return lan966x_mac_wait_for_completion(lan966x); ret = lan966x_mac_wait_for_completion(lan966x);
spin_unlock(&lan966x->mac_lock);
return ret;
} }
/* The mask of the front ports is encoded inside the mac parameter via a call /* The mask of the front ports is encoded inside the mac parameter via a call
@ -113,11 +119,13 @@ int lan966x_mac_learn(struct lan966x *lan966x, int port,
return __lan966x_mac_learn(lan966x, port, false, mac, vid, type); return __lan966x_mac_learn(lan966x, port, false, mac, vid, type);
} }
int lan966x_mac_forget(struct lan966x *lan966x, static int lan966x_mac_forget_locked(struct lan966x *lan966x,
const unsigned char mac[ETH_ALEN], const unsigned char mac[ETH_ALEN],
unsigned int vid, unsigned int vid,
enum macaccess_entry_type type) enum macaccess_entry_type type)
{ {
lockdep_assert_held(&lan966x->mac_lock);
lan966x_mac_select(lan966x, mac, vid); lan966x_mac_select(lan966x, mac, vid);
/* Issue a forget command */ /* Issue a forget command */
@ -128,6 +136,20 @@ int lan966x_mac_forget(struct lan966x *lan966x,
return lan966x_mac_wait_for_completion(lan966x); return lan966x_mac_wait_for_completion(lan966x);
} }
int lan966x_mac_forget(struct lan966x *lan966x,
const unsigned char mac[ETH_ALEN],
unsigned int vid,
enum macaccess_entry_type type)
{
int ret;
spin_lock(&lan966x->mac_lock);
ret = lan966x_mac_forget_locked(lan966x, mac, vid, type);
spin_unlock(&lan966x->mac_lock);
return ret;
}
int lan966x_mac_cpu_learn(struct lan966x *lan966x, const char *addr, u16 vid) int lan966x_mac_cpu_learn(struct lan966x *lan966x, const char *addr, u16 vid)
{ {
return lan966x_mac_learn(lan966x, PGID_CPU, addr, vid, ENTRYTYPE_LOCKED); return lan966x_mac_learn(lan966x, PGID_CPU, addr, vid, ENTRYTYPE_LOCKED);
@ -161,7 +183,7 @@ static struct lan966x_mac_entry *lan966x_mac_alloc_entry(const unsigned char *ma
{ {
struct lan966x_mac_entry *mac_entry; struct lan966x_mac_entry *mac_entry;
mac_entry = kzalloc(sizeof(*mac_entry), GFP_KERNEL); mac_entry = kzalloc(sizeof(*mac_entry), GFP_ATOMIC);
if (!mac_entry) if (!mac_entry)
return NULL; return NULL;
@ -179,7 +201,6 @@ static struct lan966x_mac_entry *lan966x_mac_find_entry(struct lan966x *lan966x,
struct lan966x_mac_entry *res = NULL; struct lan966x_mac_entry *res = NULL;
struct lan966x_mac_entry *mac_entry; struct lan966x_mac_entry *mac_entry;
spin_lock(&lan966x->mac_lock);
 	list_for_each_entry(mac_entry, &lan966x->mac_entries, list) {
 		if (mac_entry->vid == vid &&
 		    ether_addr_equal(mac, mac_entry->mac) &&
@@ -188,7 +209,6 @@ static struct lan966x_mac_entry *lan966x_mac_find_entry(struct lan966x *lan966x,
 			break;
 		}
 	}
-	spin_unlock(&lan966x->mac_lock);
 
 	return res;
 }
@@ -231,8 +251,11 @@ int lan966x_mac_add_entry(struct lan966x *lan966x, struct lan966x_port *port,
 {
 	struct lan966x_mac_entry *mac_entry;
 
-	if (lan966x_mac_lookup(lan966x, addr, vid, ENTRYTYPE_NORMAL))
+	spin_lock(&lan966x->mac_lock);
+	if (lan966x_mac_lookup(lan966x, addr, vid, ENTRYTYPE_NORMAL)) {
+		spin_unlock(&lan966x->mac_lock);
 		return 0;
+	}
 
 	/* In case the entry already exists, don't add it again to SW,
 	 * just update HW, but we need to look in the actual HW because
@@ -241,21 +264,25 @@ int lan966x_mac_add_entry(struct lan966x *lan966x, struct lan966x_port *port,
 	 * add the entry but without the extern_learn flag.
 	 */
 	mac_entry = lan966x_mac_find_entry(lan966x, addr, vid, port->chip_port);
-	if (mac_entry)
-		return lan966x_mac_learn(lan966x, port->chip_port,
-					 addr, vid, ENTRYTYPE_LOCKED);
+	if (mac_entry) {
+		spin_unlock(&lan966x->mac_lock);
+		goto mac_learn;
+	}
 
 	mac_entry = lan966x_mac_alloc_entry(addr, vid, port->chip_port);
-	if (!mac_entry)
+	if (!mac_entry) {
+		spin_unlock(&lan966x->mac_lock);
 		return -ENOMEM;
+	}
 
-	spin_lock(&lan966x->mac_lock);
 	list_add_tail(&mac_entry->list, &lan966x->mac_entries);
 	spin_unlock(&lan966x->mac_lock);
 
-	lan966x_mac_learn(lan966x, port->chip_port, addr, vid, ENTRYTYPE_LOCKED);
 	lan966x_fdb_call_notifiers(SWITCHDEV_FDB_OFFLOADED, addr, vid, port->dev);
 
+mac_learn:
+	lan966x_mac_learn(lan966x, port->chip_port, addr, vid, ENTRYTYPE_LOCKED);
+
 	return 0;
 }
 
@@ -269,7 +296,8 @@ int lan966x_mac_del_entry(struct lan966x *lan966x, const unsigned char *addr,
 				 list) {
 		if (mac_entry->vid == vid &&
 		    ether_addr_equal(addr, mac_entry->mac)) {
-			lan966x_mac_forget(lan966x, mac_entry->mac, mac_entry->vid,
-					   ENTRYTYPE_LOCKED);
+			lan966x_mac_forget_locked(lan966x, mac_entry->mac,
+						  mac_entry->vid,
+						  ENTRYTYPE_LOCKED);
 
 			list_del(&mac_entry->list);
@@ -288,8 +316,8 @@ void lan966x_mac_purge_entries(struct lan966x *lan966x)
 	spin_lock(&lan966x->mac_lock);
 	list_for_each_entry_safe(mac_entry, tmp, &lan966x->mac_entries,
 				 list) {
-		lan966x_mac_forget(lan966x, mac_entry->mac, mac_entry->vid,
-				   ENTRYTYPE_LOCKED);
+		lan966x_mac_forget_locked(lan966x, mac_entry->mac,
+					  mac_entry->vid, ENTRYTYPE_LOCKED);
 
 		list_del(&mac_entry->list);
 		kfree(mac_entry);
@@ -325,10 +353,13 @@ static void lan966x_mac_irq_process(struct lan966x *lan966x, u32 row,
 {
 	struct lan966x_mac_entry *mac_entry, *tmp;
 	unsigned char mac[ETH_ALEN] __aligned(2);
+	struct list_head mac_deleted_entries;
 	u32 dest_idx;
 	u32 column;
 	u16 vid;
 
+	INIT_LIST_HEAD(&mac_deleted_entries);
+
 	spin_lock(&lan966x->mac_lock);
 	list_for_each_entry_safe(mac_entry, tmp, &lan966x->mac_entries, list) {
 		bool found = false;
@@ -362,19 +393,25 @@ static void lan966x_mac_irq_process(struct lan966x *lan966x, u32 row,
 		}
 
 		if (!found) {
-			/* Notify the bridge that the entry doesn't exist
-			 * anymore in the HW and remove the entry from the SW
-			 * list
-			 */
-			lan966x_mac_notifiers(SWITCHDEV_FDB_DEL_TO_BRIDGE,
-					      mac_entry->mac, mac_entry->vid,
-					      lan966x->ports[mac_entry->port_index]->dev);
-
-			list_del(&mac_entry->list);
-			kfree(mac_entry);
+			list_del(&mac_entry->list);
+			/* Move the entry from SW list to a tmp list such that
+			 * it would be deleted later
+			 */
+			list_add_tail(&mac_entry->list, &mac_deleted_entries);
 		}
 	}
 	spin_unlock(&lan966x->mac_lock);
+
+	list_for_each_entry_safe(mac_entry, tmp, &mac_deleted_entries, list) {
+		/* Notify the bridge that the entry doesn't exist
+		 * anymore in the HW
+		 */
+		lan966x_mac_notifiers(SWITCHDEV_FDB_DEL_TO_BRIDGE,
+				      mac_entry->mac, mac_entry->vid,
+				      lan966x->ports[mac_entry->port_index]->dev);
+		list_del(&mac_entry->list);
+		kfree(mac_entry);
+	}
 
 	/* Now go to the list of columns and see if any entry was not in the SW
 	 * list, then that means that the entry is new so it needs to notify the
@@ -396,13 +433,20 @@ static void lan966x_mac_irq_process(struct lan966x *lan966x, u32 row,
 		if (WARN_ON(dest_idx >= lan966x->num_phys_ports))
 			continue;
 
+		spin_lock(&lan966x->mac_lock);
+		mac_entry = lan966x_mac_find_entry(lan966x, mac, vid, dest_idx);
+		if (mac_entry) {
+			spin_unlock(&lan966x->mac_lock);
+			continue;
+		}
+
 		mac_entry = lan966x_mac_alloc_entry(mac, vid, dest_idx);
-		if (!mac_entry)
+		if (!mac_entry) {
+			spin_unlock(&lan966x->mac_lock);
 			return;
+		}
 
 		mac_entry->row = row;
 
-		spin_lock(&lan966x->mac_lock);
 		list_add_tail(&mac_entry->list, &lan966x->mac_entries);
 		spin_unlock(&lan966x->mac_lock);
 
@@ -424,6 +468,7 @@ irqreturn_t lan966x_mac_irq_handler(struct lan966x *lan966x)
 		lan966x, ANA_MACTINDX);
 
 	while (1) {
+		spin_lock(&lan966x->mac_lock);
 		lan_rmw(ANA_MACACCESS_MAC_TABLE_CMD_SET(MACACCESS_CMD_SYNC_GET_NEXT),
 			ANA_MACACCESS_MAC_TABLE_CMD,
 			lan966x, ANA_MACACCESS);
@@ -447,12 +492,15 @@ irqreturn_t lan966x_mac_irq_handler(struct lan966x *lan966x)
 			stop = false;
 
 		if (column == LAN966X_MAC_COLUMNS - 1 &&
-		    index == 0 && stop)
+		    index == 0 && stop) {
+			spin_unlock(&lan966x->mac_lock);
 			break;
+		}
 
 		entry[column].mach = lan_rd(lan966x, ANA_MACHDATA);
 		entry[column].macl = lan_rd(lan966x, ANA_MACLDATA);
 		entry[column].maca = lan_rd(lan966x, ANA_MACACCESS);
+		spin_unlock(&lan966x->mac_lock);
 
 		/* Once all the columns are read process them */
 		if (column == LAN966X_MAC_COLUMNS - 1) {

View File

@@ -474,7 +474,7 @@ nfp_fl_set_tun(struct nfp_app *app, struct nfp_fl_set_tun *set_tun,
 			set_tun->ttl = ip4_dst_hoplimit(&rt->dst);
 			ip_rt_put(rt);
 		} else {
-			set_tun->ttl = net->ipv4.sysctl_ip_default_ttl;
+			set_tun->ttl = READ_ONCE(net->ipv4.sysctl_ip_default_ttl);
 		}
 	}
 

View File

@@ -298,6 +298,11 @@ static void get_arttime(struct mii_bus *mii, int intel_adhoc_addr,
 	*art_time = ns;
 }
 
+static int stmmac_cross_ts_isr(struct stmmac_priv *priv)
+{
+	return (readl(priv->ioaddr + GMAC_INT_STATUS) & GMAC_INT_TSIE);
+}
+
 static int intel_crosststamp(ktime_t *device,
 			     struct system_counterval_t *system,
 			     void *ctx)
@@ -313,8 +318,6 @@ static int intel_crosststamp(ktime_t *device,
 	u32 num_snapshot;
 	u32 gpio_value;
 	u32 acr_value;
-	int ret;
-	u32 v;
 	int i;
 
 	if (!boot_cpu_has(X86_FEATURE_ART))
@@ -328,6 +331,8 @@ static int intel_crosststamp(ktime_t *device,
 	if (priv->plat->ext_snapshot_en)
 		return -EBUSY;
 
+	priv->plat->int_snapshot_en = 1;
+
 	mutex_lock(&priv->aux_ts_lock);
 	/* Enable Internal snapshot trigger */
 	acr_value = readl(ptpaddr + PTP_ACR);
@@ -347,6 +352,7 @@ static int intel_crosststamp(ktime_t *device,
 		break;
 	default:
 		mutex_unlock(&priv->aux_ts_lock);
+		priv->plat->int_snapshot_en = 0;
 		return -EINVAL;
 	}
 	writel(acr_value, ptpaddr + PTP_ACR);
@@ -368,13 +374,12 @@ static int intel_crosststamp(ktime_t *device,
 	gpio_value |= GMAC_GPO1;
 	writel(gpio_value, ioaddr + GMAC_GPIO_STATUS);
 
-	/* Poll for time sync operation done */
-	ret = readl_poll_timeout(priv->ioaddr + GMAC_INT_STATUS, v,
-				 (v & GMAC_INT_TSIE), 100, 10000);
-
-	if (ret == -ETIMEDOUT) {
-		pr_err("%s: Wait for time sync operation timeout\n", __func__);
-		return ret;
+	/* Time sync done Indication - Interrupt method */
+	if (!wait_event_interruptible_timeout(priv->tstamp_busy_wait,
+					      stmmac_cross_ts_isr(priv),
+					      HZ / 100)) {
+		priv->plat->int_snapshot_en = 0;
+		return -ETIMEDOUT;
 	}
 
 	num_snapshot = (readl(ioaddr + GMAC_TIMESTAMP_STATUS) &
@@ -392,6 +397,7 @@ static int intel_crosststamp(ktime_t *device,
 	}
 
 	system->cycles *= intel_priv->crossts_adj;
+	priv->plat->int_snapshot_en = 0;
 
 	return 0;
 }
@@ -576,6 +582,7 @@ static int intel_mgbe_common_data(struct pci_dev *pdev,
 
 	plat->has_crossts = true;
 	plat->crosststamp = intel_crosststamp;
+	plat->int_snapshot_en = 0;
 
 	/* Setup MSI vector offset specific to Intel mGbE controller */
 	plat->msi_mac_vec = 29;

View File

@@ -576,32 +576,7 @@ static int mediatek_dwmac_init(struct platform_device *pdev, void *priv)
 		}
 	}
 
-	ret = clk_bulk_prepare_enable(variant->num_clks, plat->clks);
-	if (ret) {
-		dev_err(plat->dev, "failed to enable clks, err = %d\n", ret);
-		return ret;
-	}
-
-	ret = clk_prepare_enable(plat->rmii_internal_clk);
-	if (ret) {
-		dev_err(plat->dev, "failed to enable rmii internal clk, err = %d\n", ret);
-		goto err_clk;
-	}
-
 	return 0;
-
-err_clk:
-	clk_bulk_disable_unprepare(variant->num_clks, plat->clks);
-	return ret;
-}
-
-static void mediatek_dwmac_exit(struct platform_device *pdev, void *priv)
-{
-	struct mediatek_dwmac_plat_data *plat = priv;
-	const struct mediatek_dwmac_variant *variant = plat->variant;
-
-	clk_disable_unprepare(plat->rmii_internal_clk);
-	clk_bulk_disable_unprepare(variant->num_clks, plat->clks);
 }
 
 static int mediatek_dwmac_clks_config(void *priv, bool enabled)
@@ -643,7 +618,6 @@ static int mediatek_dwmac_common_data(struct platform_device *pdev,
 	plat->addr64 = priv_plat->variant->dma_bit_mask;
 	plat->bsp_priv = priv_plat;
 	plat->init = mediatek_dwmac_init;
-	plat->exit = mediatek_dwmac_exit;
 	plat->clks_config = mediatek_dwmac_clks_config;
 	if (priv_plat->variant->dwmac_fix_mac_speed)
 		plat->fix_mac_speed = priv_plat->variant->dwmac_fix_mac_speed;
@@ -712,13 +686,32 @@ static int mediatek_dwmac_probe(struct platform_device *pdev)
 	mediatek_dwmac_common_data(pdev, plat_dat, priv_plat);
 	mediatek_dwmac_init(pdev, priv_plat);
 
+	ret = mediatek_dwmac_clks_config(priv_plat, true);
+	if (ret)
+		return ret;
+
 	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
 	if (ret) {
 		stmmac_remove_config_dt(pdev, plat_dat);
-		return ret;
+		goto err_drv_probe;
 	}
 
 	return 0;
+
+err_drv_probe:
+	mediatek_dwmac_clks_config(priv_plat, false);
+	return ret;
+}
+
+static int mediatek_dwmac_remove(struct platform_device *pdev)
+{
+	struct mediatek_dwmac_plat_data *priv_plat = get_stmmac_bsp_priv(&pdev->dev);
+	int ret;
+
+	ret = stmmac_pltfr_remove(pdev);
+	mediatek_dwmac_clks_config(priv_plat, false);
+
+	return ret;
 }
 
 static const struct of_device_id mediatek_dwmac_match[] = {
@@ -733,7 +726,7 @@ MODULE_DEVICE_TABLE(of, mediatek_dwmac_match);
 
 static struct platform_driver mediatek_dwmac_driver = {
 	.probe  = mediatek_dwmac_probe,
-	.remove = stmmac_pltfr_remove,
+	.remove = mediatek_dwmac_remove,
 	.driver = {
 		.name           = "dwmac-mediatek",
 		.pm		= &stmmac_pltfr_pm_ops,

View File

@@ -150,7 +150,8 @@
 #define GMAC_PCS_IRQ_DEFAULT	(GMAC_INT_RGSMIIS | GMAC_INT_PCS_LINK |	\
 				 GMAC_INT_PCS_ANE)
 
-#define GMAC_INT_DEFAULT_ENABLE	(GMAC_INT_PMT_EN | GMAC_INT_LPI_EN)
+#define GMAC_INT_DEFAULT_ENABLE	(GMAC_INT_PMT_EN | GMAC_INT_LPI_EN | \
+				 GMAC_INT_TSIE)
 
 enum dwmac4_irq_status {
 	time_stamp_irq = 0x00001000,

View File

@@ -23,6 +23,7 @@
 static void dwmac4_core_init(struct mac_device_info *hw,
 			     struct net_device *dev)
 {
+	struct stmmac_priv *priv = netdev_priv(dev);
 	void __iomem *ioaddr = hw->pcsr;
 	u32 value = readl(ioaddr + GMAC_CONFIG);
 
@@ -58,6 +59,9 @@ static void dwmac4_core_init(struct mac_device_info *hw,
 		value |= GMAC_INT_FPE_EN;
 
 	writel(value, ioaddr + GMAC_INT_EN);
+
+	if (GMAC_INT_DEFAULT_ENABLE & GMAC_INT_TSIE)
+		init_waitqueue_head(&priv->tstamp_busy_wait);
 }
 
 static void dwmac4_rx_queue_enable(struct mac_device_info *hw,
@@ -219,6 +223,9 @@ static void dwmac4_map_mtl_dma(struct mac_device_info *hw, u32 queue, u32 chan)
 	if (queue == 0 || queue == 4) {
 		value &= ~MTL_RXQ_DMA_Q04MDMACH_MASK;
 		value |= MTL_RXQ_DMA_Q04MDMACH(chan);
+	} else if (queue > 4) {
+		value &= ~MTL_RXQ_DMA_QXMDMACH_MASK(queue - 4);
+		value |= MTL_RXQ_DMA_QXMDMACH(chan, queue - 4);
 	} else {
 		value &= ~MTL_RXQ_DMA_QXMDMACH_MASK(queue);
 		value |= MTL_RXQ_DMA_QXMDMACH(chan, queue);

Some files were not shown because too many files have changed in this diff.