The SIMATIC IPC227E uses the PMC clock for on-board components and gets
stuck during boot if the clock is disabled. Therefore, add this device
to the critical systems list.
Fixes: 648e921888 ("clk: x86: Stop marking clocks as CLK_IS_CRITICAL")
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Pull dmaengine fixes from Vinod Koul:
"Some late fixes for drivers:
- memory leak in ti crossbar dma driver
- cleanup of omap dma probe
- Fix for link list configuration in sprd dma driver
- Handling fixed for DMACHCLR if iommu is mapped in rcar dma"
* tag 'dmaengine-fix-5.3' of git://git.infradead.org/users/vkoul/slave-dma:
dmaengine: rcar-dmac: Fix DMACHCLR handling if iommu is mapped
dmaengine: sprd: Fix the DMA link-list configuration
dmaengine: ti: omap-dma: Add cleanup in omap_dma_probe()
dmaengine: ti: dma-crossbar: Fix a memory leak bug
Prior to this commit, removing the intel_pmc_core_pltdrv module
would cause the following warning:
Device 'intel_pmc_core.0' does not have a release() function, it is broken and must be fixed. See Documentation/kobject.txt.
WARNING: CPU: 0 PID: 2202 at drivers/base/core.c:1238 device_release+0x6f/0x80
This commit hence adds an empty release function for the driver.
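A minimal sketch of such a no-op release callback (names illustrative):

    static void pmc_core_release(struct device *dev)
    {
            /* Nothing to free; the platform device is statically allocated. */
    }

    static struct platform_device pmc_core_device = {
            .name = "intel_pmc_core",
            .dev  = {
                    .release = pmc_core_release,
            },
    };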
Signed-off-by: M. Vefa Bicakci <m.v.b@runbox.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
On a Xen-based PVH virtual machine with more than 4 GiB of RAM,
intel_pmc_core fails initialization with the following warning message
from the kernel, indicating that the driver is attempting to ioremap
RAM:
ioremap on RAM at 0x00000000fe000000 - 0x00000000fe001fff
WARNING: CPU: 1 PID: 434 at arch/x86/mm/ioremap.c:186 __ioremap_caller.constprop.0+0x2aa/0x2c0
...
Call Trace:
? pmc_core_probe+0x87/0x2d0 [intel_pmc_core]
pmc_core_probe+0x87/0x2d0 [intel_pmc_core]
This issue appears to manifest itself because of the following fallback
mechanism in the driver:
    if (lpit_read_residency_count_address(&slp_s0_addr))
            pmcdev->base_addr = PMC_BASE_ADDR_DEFAULT;
The driver does not verify that the address PMC_BASE_ADDR_DEFAULT (i.e.,
0xFE000000) is actually a valid MMIO region, which is the check this patch
introduces. With this patch, if PMC_BASE_ADDR_DEFAULT lies in RAM, the driver
will not attempt to ioremap that address.
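A minimal sketch of the kind of guard described, assuming the generic
page_is_ram() helper (exact placement in pmc_core_probe() is illustrative):

    if (lpit_read_residency_count_address(&slp_s0_addr)) {
            /* Do not ioremap() the hard-coded fallback if it lies in RAM. */
            if (page_is_ram(PHYS_PFN(PMC_BASE_ADDR_DEFAULT)))
                    return -ENODEV;

            pmcdev->base_addr = PMC_BASE_ADDR_DEFAULT;
    }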
Signed-off-by: M. Vefa Bicakci <m.v.b@runbox.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Jakub Kicinski says:
====================
net/tls: small TX offload optimizations
This set brings small TLS TX device optimizations. The biggest
gain comes from fixing a misuse of non-temporal copy instructions.
On a synthetic workload modelled after a customer's RFC application
I see a 3-5% gain.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Unlike normal TCP code, TLS has to touch the cache lines
it copies into in order to fill in the header info. On memory-heavy
workloads, having non-temporal stores and normal accesses targeting
the same cache line leads to significant overhead.
Measured a 3% overhead when running 3600 round-robin connections
with an additional memory-heavy workload.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For TLS device offload the tag/message authentication code is
filled in by the device. The kernel merely reserves space for
it. Because the device overwrites it, the contents of the tag do
not matter. Current code tries to save space by reusing the
header as the tag. This, however, leads to an additional frag
being created and defeats buffer coalescing (which trickles
all the way down to the drivers).
Remove this optimization, and try to allocate the space for
the tag in the usual way, leaving the memory uninitialized.
If the memory allocation fails, rewind the record pointer so that
the already copied user data is used as the tag.
Note that the optimization was actually buggy: the tag
for TLS 1.2 is 16 bytes, but the header is just 13, so the reuse
may have looked past the end of the page.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
All modifications to the TLS record list happen under the socket
lock. Since records form an ordered queue, readers are only
concerned about elements being removed; additions can happen
concurrently.
Use RCU primitives to ensure the correct access types
(READ_ONCE/WRITE_ONCE).
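A rough sketch of the resulting access pattern (structure and field names
approximate):

    /* Writer side, under the socket lock: removals use the RCU variant. */
    list_del_rcu(&rec->list);

    /* Reader side only needs to tolerate concurrent removals. */
    rcu_read_lock();
    list_for_each_entry_rcu(rec, &offload_ctx->records_list, list) {
            /* fields shared with writers are accessed via READ_ONCE() */
    }
    rcu_read_unlock();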
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Johan Hedberg says:
====================
pull request: bluetooth-next 2019-09-06
Here's the main bluetooth-next pull request for the 5.4 kernel.
- Cleanups & fixes to btrtl driver
- Fixes for Realtek devices in btusb, e.g. for suspend handling
- Firmware loading support for BCM4345C5
- hidp_send_message() return value handling fixes
- Added support for utilizing Fast Advertising Interval
- Various other minor cleanups & fixes
Please let me know if there are any issues pulling. Thanks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Flower control message replies are handled in different locations. The truly
high priority replies are handled in the BH (tasklet) context, while the
remaining replies are handled in a predefined Linux work queue. The work
queue handler orders replies into high and low priority groups, and always
starts servicing the high priority replies within the received batch first.
Reply Type:              Rtnl Lock:   Handler:
CMSG_TYPE_PORT_MOD       no           BH tasklet (mtu)
CMSG_TYPE_TUN_NEIGH      no           BH tasklet
CMSG_TYPE_FLOW_STATS     no           BH tasklet
CMSG_TYPE_PORT_REIFY     no           WQ high
CMSG_TYPE_PORT_MOD       yes          WQ high (link/mtu)
CMSG_TYPE_MERGE_HINT     yes          WQ low
CMSG_TYPE_NO_NEIGH       no           WQ low
CMSG_TYPE_ACTIVE_TUNS    no           WQ low
CMSG_TYPE_QOS_STATS      no           WQ low
CMSG_TYPE_LAG_CONFIG     no           WQ low
A subset of control messages can block waiting for an rtnl lock (from both
work queue priority groups). The rtnl lock is heavily contended for by
external processes such as systemd-udevd, systemd-network and libvirtd,
especially during netdev creation, such as when flower VFs and representors
are instantiated.
Kernel netlink instrumentation shows that external processes (such as
systemd-udevd) often use successive rtnl_trylock() sequences, which can cause
a control message blocked on rtnl_lock() to starve for long periods during
rtnl lock contention, i.e. during netdev creation.
In the current design a single blocked control message will block the entire
work queue (both priorities), and introduce a latency which is
nondeterministic and dependent on system wide rtnl lock usage.
In some extreme cases, one blocked control message at exactly the wrong time,
just before the maximum number of VFs are instantiated, can block the work
queue for long enough to prevent VF representor REIFY replies from getting
handled in time for the 40ms timeout.
The firmware will deliver the total maximum number of REIFY message replies in
around 300us.
Only REIFY and MTU update messages require replies within a timeout period (of
40ms). The MTU-only updates are already done directly in the BH (tasklet)
handler.
Move the REIFY handler down into the BH (tasklet) in order to resolve timeouts
caused by a blocked work queue waiting on rtnl locks.
Signed-off-by: Fred Lotter <frederik.lotter@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Don't populate the array spec_opcode on the stack but instead make it
static const. Makes the object code smaller by 48 bytes.
Before:
text data bss dec hex filename
6914 1040 128 8082 1f92 hns3/hns3vf/hclgevf_cmd.o
After:
text data bss dec hex filename
6866 1040 128 8034 1f62 hns3/hns3vf/hclgevf_cmd.o
(gcc version 9.2.1, amd64)
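Illustrative only (element type and values here are placeholders, not the
real table); the change has this shape:

    /* Before: rebuilt on the stack on every call. */
    u8 spec_opcode[] = { 0x01, 0x02, 0x03 };

    /* After: emitted once in read-only data. */
    static const u8 spec_opcode[] = { 0x01, 0x02, 0x03 };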
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Don't populate the arrays on the stack but instead make them
static const. Makes the object code smaller by 281 bytes.
Before:
text data bss dec hex filename
87553 5672 0 93225 16c29 benet/be_cmds.o
After:
text data bss dec hex filename
87112 5832 0 92944 16b10 benet/be_cmds.o
(gcc version 9.2.1, amd64)
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Historically, support for frag_list packets entering skb_segment() was
limited to frag_list members terminating on exact same gso_size
boundaries. This is verified with a BUG_ON since commit 89319d3801
("net: Add frag_list support to skb_segment"), quote:
As such we require all frag_list members terminate on exact MSS
boundaries. This is checked using BUG_ON.
As there should only be one producer in the kernel of such packets,
namely GRO, this requirement should not be difficult to maintain.
However, since commit 6578171a7f ("bpf: add bpf_skb_change_proto helper"),
the "exact MSS boundaries" assumption no longer holds:
An eBPF program using bpf_skb_change_proto() DOES modify 'gso_size', but
leaves the frag_list members as originally merged by GRO with the
original 'gso_size'. Example of such programs are bpf-based NAT46 or
NAT64.
This led to a kernel BUG_ON for flows involving:
- GRO generating a frag_list skb
- bpf program performing bpf_skb_change_proto() or bpf_skb_adjust_room()
- skb_segment() of the skb
See example BUG_ON reports in [0].
In commit 13acc94eff ("net: permit skb_segment on head_frag frag_list skb"),
skb_segment() was modified to support the "gso_size mangling" case of
a frag_list GRO'ed skb, but *only* for frag_list members having
head_frag==true (having a page-fragment head).
Alas, GRO packets having frag_list members with a linear kmalloced head
(head_frag==false) still hit the BUG_ON.
This commit adds support to skb_segment() for a 'head_skb' packet having
a frag_list whose members are *non* head_frag, with gso_size mangled, by
disabling SG and thus falling-back to copying the data from the given
'head_skb' into the generated segmented skbs - as suggested by Willem de
Bruijn [1].
Since this approach involves the penalty of skb_copy_and_csum_bits()
when building the segments, care was taken in order to enable this
solution only when required:
- untrusted gso_size, by testing SKB_GSO_DODGY is set
(SKB_GSO_DODGY is set by any gso_size mangling functions in
net/core/filter.c)
- the frag_list is non empty, its item is a non head_frag, *and* the
headlen of the given 'head_skb' does not match the gso_size.
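A rough sketch of the gating condition described above (illustrative, not
the exact hunk):

    if ((skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY) &&
        list_skb && !list_skb->head_frag &&
        skb_headlen(head_skb) != skb_shinfo(head_skb)->gso_size)
            /* fall back to copying: disable SG for this segmentation */
            features &= ~NETIF_F_SG;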
[0]
https://lore.kernel.org/netdev/20190826170724.25ff616f@pixies/
https://lore.kernel.org/netdev/9265b93f-253d-6b8c-f2b8-4b54eff1835c@fb.com/
[1]
https://lore.kernel.org/netdev/CA+FuTSfVsgNDi7c=GUU8nMg2hWxF2SjCNLXetHeVPdnxAW5K-w@mail.gmail.com/
Fixes: 6578171a7f ("bpf: add bpf_skb_change_proto helper")
Suggested-by: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jose Abreu says:
====================
net: stmmac: Improvements and fixes for -next
Improvements and fixes for recently introduced features. All for -next tree.
More info in commit logs.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
We may have some SoCs that can't achieve XGMAC max speed. Limit it if
asked to.
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a test to validate that Split Header feature is working correctly.
It works by using the recently introduced counter that increments each
time a packet with a split header is received.
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We already do this by default in the TX path, so we can also enable
Jumbo Frame support in the RX path independently of the MTU value.
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We need to set the RX tail pointer so that RX engine starts working
again after finishing the Flow Control test.
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add checks for support of Source Address Insertion/Replacement before
running the test.
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is a re-post of a previous patch written by David Miller [1].
Phil Karn reported[2] that on busy networks with lots of unresolved
multicast routing entries, the creation of new multicast group routes
can be extremely slow and unreliable.
The reason is that we hard-coded the number of multicast route entries with
unresolved source addresses (cache_resolve_queue_len) to 10. If some multicast
routes never resolve and the number of unresolved entries grows, it becomes
impossible to create new multicast route cache entries.
To resolve this issue, we need to either add a sysctl entry to make
cache_resolve_queue_len configurable, or just remove the
cache_resolve_queue_len limit directly, as we already have the socket receive
queue limits of the mrouted socket, as pointed out by David.
From my side, I'd prefer to remove the cache_resolve_queue_len limit instead
of creating two more (IPv4 and IPv6 versions) sysctl entries.
[1] https://lkml.org/lkml/2018/7/22/11
[2] https://lkml.org/lkml/2018/7/21/343
v3: instead of removing cache_resolve_queue_len entirely, only remove
the hard-coded limit when allocating the unresolved cache, as Eric Dumazet
suggested, so we don't need to re-count it in other places.
v2: hold the mfc_unres_lock while walking the unresolved list in
queue_count(), as Nikolay Aleksandrov reminded.
Reported-by: Phil Karn <karn@ka9q.net>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
syzbot reported:
BUG: KMSAN: uninit-value in capi_write+0x791/0xa90 drivers/isdn/capi/capi.c:700
CPU: 0 PID: 10025 Comm: syz-executor379 Not tainted 4.20.0-rc7+ #2
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x173/0x1d0 lib/dump_stack.c:113
kmsan_report+0x12e/0x2a0 mm/kmsan/kmsan.c:613
__msan_warning+0x82/0xf0 mm/kmsan/kmsan_instr.c:313
capi_write+0x791/0xa90 drivers/isdn/capi/capi.c:700
do_loop_readv_writev fs/read_write.c:703 [inline]
do_iter_write+0x83e/0xd80 fs/read_write.c:961
vfs_writev fs/read_write.c:1004 [inline]
do_writev+0x397/0x840 fs/read_write.c:1039
__do_sys_writev fs/read_write.c:1112 [inline]
__se_sys_writev+0x9b/0xb0 fs/read_write.c:1109
__x64_sys_writev+0x4a/0x70 fs/read_write.c:1109
do_syscall_64+0xbc/0xf0 arch/x86/entry/common.c:291
entry_SYSCALL_64_after_hwframe+0x63/0xe7
[...]
The problem is that capi_write() is reading past the end of the message.
Fix it by checking the message's length in the needed places.
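A rough sketch of the idea (placement and constants illustrative; the length
accessor is the one from capiutil.h):

    u16 mlen = CAPIMSG_LEN(skb->data);

    /* Reject writes shorter than the length the CAPI header claims,
     * before any field beyond the 8-byte base header is read.
     */
    if (mlen < 8 || mlen > count) {
            kfree_skb(skb);
            return -EINVAL;
    }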
Reported-and-tested-by: syzbot+0849c524d9c634f5ae66@syzkaller.appspotmail.com
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Haiyang Zhang says:
====================
hv_netvsc: Enable sg as tunable, sync offload settings to VF NIC
This patch set fixes an issue in SG tuning, and syncs
offload settings from the synthetic NIC to the VF NIC.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The VF NIC may go down and then come up during host servicing events. This
causes the VF NIC's offload feature settings to roll back to the
defaults. This patch synchronizes features from the synthetic NIC to
the VF NIC during ndo_set_features (ethtool -K),
and in netvsc_register_vf when the VF comes back after host events.
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Mark Bloch <markb@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In a previous patch, NETIF_F_SG went missing after the code changes.
That caused the SG feature to become "fixed" (no longer tunable). This patch
includes it in hw_features, so it is tunable again.
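Illustrative sketch of the change (the exact set of companion flags in the
driver may differ):

    /* Including NETIF_F_SG in hw_features keeps it tunable via ethtool -K. */
    net->hw_features = NETIF_F_RXCSUM | NETIF_F_IP_CSUM | NETIF_F_TSO |
                       NETIF_F_RXHASH | NETIF_F_SG;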
Fixes: 23312a3be9 ("netvsc: negotiate checksum and segmentation parameters")
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Saeed Mahameed says:
====================
mlx5-updates-2019-09-05
1) Cleanups all over mlx5
2) Added port congestion counters to ethtool stats:
Add 3 counters per priority to ethtool using PPCNT:
2.1) rx_prio[p]_buf_discard - the number of packets discarded by device
due to lack of per host receive buffers
2.2) rx_prio[p]_cong_discard - the number of packets discarded by device
due to per host congestion
2.3) rx_prio[p]_marked - the number of packets ECN marked by device due
to per host congestion
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 36f1031c51 ("ibmvnic: Do not process reset during or after
device removal") made the change to exit reset if the driver has been
removed, but did not free the adapter's reset work items from the queue.
Ensure all reset work items are freed when breaking out of the loop early.
Fixes: 36f1031c51 ("ibmvnic: Do not process reset during or after device removal")
Signed-off-by: Juliet Kim <julietk@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
tcp_diag_get_aux_size() can be called with sockets in any state.
icsk_ulp_ops is only present for full sockets.
For SYN_RECV or TIME_WAIT ones we would access garbage.
Fixes: 61723b3932 ("tcp: ulp: add functions to dump ulp-specific information")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Luke Hsiao <lukehsiao@google.com>
Reported-by: Neal Cardwell <ncardwell@google.com>
Cc: Davide Caratti <dcaratti@redhat.com>
Acked-by: Davide Caratti <dcaratti@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
No need for fib_notifier_ops to be in struct net. It is used only by
fib_notifier as private data. Use net_generic to introduce a per-net
fib_notifier struct and move fib_notifier_ops there.
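A condensed sketch of the per-net pattern being used (init/exit bodies
omitted; names approximate):

    struct fib_notifier_net {
            struct list_head fib_notifier_ops;
    };

    static unsigned int fib_notifier_net_id;

    static struct pernet_operations fib_notifier_net_ops = {
            .init = fib_notifier_net_init,
            .exit = fib_notifier_net_exit,
            .id   = &fib_notifier_net_id,
            .size = sizeof(struct fib_notifier_net),
    };

    /* Look up the per-net state where struct net was used before: */
    struct fib_notifier_net *fn_net = net_generic(net, fib_notifier_net_id);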
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
According to the IEEE 802.3-2015 standard, Section 2,
28B.3 Priority resolution - Table 28-3 - Pause resolution:
In the case of Local device Pause=1 AsymDir=0 and Link partner
Pause=1 AsymDir=1, the Local device resolution should be to enable PAUSE
transmit and disable PAUSE receive.
And in the case of Local device Pause=1 AsymDir=1 and Link partner
Pause=1 AsymDir=0, the Local device resolution should be to enable PAUSE
receive and disable PAUSE transmit.
Fixes: 9525ae8395 ("phylink: add phylink infrastructure")
Signed-off-by: Stefan Chulski <stefanc@marvell.com>
Reported-by: Shaul Ben-Mayor <shaulb@marvell.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
The kernel.h macro DIV_ROUND_CLOSEST performs the computation (x + d/2)/d
but is perhaps more readable.
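For example (variable names hypothetical):

    /* open-coded rounding to nearest */
    rate = (clk + div / 2) / div;

    /* equivalent, with the helper */
    rate = DIV_ROUND_CLOSEST(clk, div);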
Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
chrome-platform fixes for v5.3-rc6
Fixes:
1. platform/chrome: cros_ec_ishtp: fix crash during suspend
- Fixes a kernel crash during suspend/resume of cros_ec_ishtp
Signed-off-by: Benson Leung <bleung@chromium.org>
Pablo Neira Ayuso says:
====================
Netfilter updates for net-next
The following patchset contains Netfilter updates for net-next:
1) Add nft_reg_store64() and nft_reg_load64() helpers, from Ander Juaristi.
2) Time matching support, also from Ander Juaristi.
3) VLAN support for nfnetlink_log, from Michael Braun.
4) Support for set element deletions from the packet path, also from Ander.
5) Remove __read_mostly from conntrack spinlock, from Li RongQing.
6) Support for updating stateful objects, this also includes the initial
client for this infrastructure: the quota extension. A follow up fix
for the control plane also comes in this batch. Patches from
Fernando Fernandez Mancera.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
We 'allocate' 'count' bytes here. In fact, 'dev_alloc_skb' already adds some
extra space for padding, so a bit more is allocated.
However, we use 1 byte for the KISS command and then copy 'count' bytes, i.e.
count+1 bytes in total.
Explicitly allocate and use 1 more byte to be safe.
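A minimal sketch of the described sizing (helpers per skbuff.h; surrounding
names illustrative):

    skb = dev_alloc_skb(count + 1);
    if (!skb)
            return;

    skb_put_u8(skb, 0);                     /* leading KISS command byte */
    skb_put_data(skb, data, count);         /* the 'count' payload bytes */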
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeff Kirsher says:
====================
100GbE Intel Wired LAN Driver Updates 2019-09-05
This series contains updates to ice driver.
Brett fixes the setting of num_q_vectors by using the maximum number
between the allocated transmit and receive queues.
Anirudh simplifies the code to use a helper function to return the main
VSI, which is the first element in the pf->vsi array. Adds a pointer
check to prevent a NULL pointer dereference. Adds a check to ensure we
do not initialize DCB on devices that are not DCB capable. Does some
housekeeping on the code to remove unnecessary indirection and reduce
the PF structure by removing elements that are not needed since the
values they were storing can be readily obtained from the
ice_get_avail_*_count() helpers. Updates the printed strings to make it
easier to search the logs for driver capabilities.
Jesse cleans up unnecessary function arguments. Updated the code to use
prefetch() to add some efficiency to the driver to avoid a cache miss.
Did some housekeeping on the code to remove the configurable transmit
work limit via ethtool which ended up creating performance overhead.
Made additional performance enhancements by updating the driver to start
out with a reasonable number of descriptors by changing the default to
2048.
Mitch fixes the reset logic for VFs by clearing VF_MBX_ARQLEN register
when the source of the reset is not PFR.
Lukasz updates the driver with a fix similar to one in the i40e driver,
reporting link down for VFs when the PF queues are not enabled.
Akeem updates the driver to report the VF link status once we get VF
resources so that we can reflect the link status similarly to how the PF
reports link speed.
Ashish updates the transmit context structure based on recent changes to
the hardware specification.
Dave updates the DCB logic to allow a delayed registration for MIB
change events so that the driver is not accepting events before it is
ready for them.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
In some systems, the Device/Port Type in the PCI Express Capabilities
register incorrectly identifies upstream ports as downstream ports.
d0751b98df ("PCI: Add dev->has_secondary_link to track downstream PCIe
links") addressed this by adding pci_dev.has_secondary_link, which is set
for downstream ports. But this is confusing because pci_pcie_type()
sometimes gives the wrong answer, and it's not obvious that we should use
pci_dev.has_secondary_link instead.
Reduce the confusion by correcting the type of the port itself so that
pci_pcie_type() returns the actual type regardless of what the Device/Port
Type register claims it is. Update the users to call pci_pcie_type() and
pcie_downstream_port() accordingly, and remove pci_dev.has_secondary_link
completely.
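Callers change along these lines (illustrative):

    -       if (dev->has_secondary_link)
    +       if (pcie_downstream_port(dev))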
Link: https://lore.kernel.org/linux-pci/20190703133953.GK128603@google.com/
Suggested-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20190822085553.62697-2-mika.westerberg@linux.intel.com
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Kalle Valo says:
====================
wireless-drivers-next patches for 5.4
Second set of patches for 5.4. Lots of changes for iwlwifi and mt76,
but also smaller changes to other drivers.
Major changes:
iwlwifi
* remove broken and unused runtime power management mode for PCIe
devices, removes IWLWIFI_PCIE_RTPM Kconfig option as well
* support new ACPI value for per-platform antenna gain
* support for single antenna diversity
* support for new WoWLAN FW API
brcmfmac
* add reset debugfs file for testing firmware restart
mt76
* DFS pattern detector for mt7615 (DFS channels not enabled yet)
* Channel Switch Announcement (CSA) support for mt7615
* new device support for mt76x0
* support for more ciphers in mt7615
* smart carrier sense on mt7615
* survey support on mt7615
* multiple interfaces on mt76x02u
rtw88
* enable MSI interrupt
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov says:
====================
pull-request: bpf 2019-09-06
The following pull-request contains BPF updates for your *net* tree.
The main changes are:
1) verifier precision tracking fix, from Alexei.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The infrastructure to mock core libnvdimm routines for unit testing
purposes is prone to bitrot relative to refactoring of that core. Arrange
for the unit test core to be built when CONFIG_COMPILE_TEST=y. This does
not result in a functional unit test environment, it is only a helper for
0day to catch unit test build regressions.
Note that there are a few x86isms in the implementation, so this does not
bother compile testing on architectures other than 64-bit x86.
Link: https://lore.kernel.org/r/156763690875.2556198.15786177395425033830.stgit@dwillia2-desk3.amr.corp.intel.com
Reported-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
We need to make sure implementations don't cheat and don't have a possible
schedule/blocking point deeply buried where review can't catch it.
I'm not sure whether this is the best way to make sure all the
might_sleep() callsites trigger, and it's a bit ugly in the code flow.
But it gets the job done.
Inspired by an i915 patch series which did exactly that, because the rules
haven't been entirely clear to us.
Link: https://lore.kernel.org/r/20190826201425.17547-5-daniel.vetter@ffwll.ch
Reviewed-by: Christian König <christian.koenig@amd.com> (v1)
Reviewed-by: Jérôme Glisse <jglisse@redhat.com> (v4)
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
In some special cases we must not block, but there's not a spinlock,
preempt-off, irqs-off or similar critical section already that arms the
might_sleep() debug checks. Add a non_block_start/end() pair to annotate
these.
This will be used in the oom paths of mmu-notifiers, where blocking is not
allowed to make sure there's forward progress. Quoting Michal:
"The notifier is called from quite a restricted context - oom_reaper -
which shouldn't depend on any locks or sleepable conditionals. The code
should be swift as well but we mostly do care about it to make a forward
progress. Checking for sleepable context is the best thing we could come
up with that would describe these demands at least partially."
Peter also asked whether we want to catch spinlocks on top, but Michal
said those are less of a problem because spinlocks can't have an indirect
dependency upon the page allocator and hence close the loop with the oom
reaper.
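A minimal sketch of what the pair might reduce to, assuming a per-task
counter consulted by might_sleep() (illustrative, debug builds only):

    #define non_block_start()  (current->non_block_count++)
    #define non_block_end()    (WARN_ON(current->non_block_count-- == 0))

    /* e.g. around a notifier call that must not block */
    non_block_start();
    ret = do_work_that_must_not_block();    /* hypothetical callee */
    non_block_end();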
Suggested by Michal Hocko.
Link: https://lore.kernel.org/r/20190826201425.17547-4-daniel.vetter@ffwll.ch
Acked-by: Christian König <christian.koenig@amd.com> (v1)
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
This check was accidentally deleted in the commit below. There are cases
where the driver will call unregister even though it hasn't registered
anything.
CPU 0 Unable to handle kernel paging request at virtual address 0000001c, epc == 808de6d4, ra == 804d32ec
Call Trace:
[<808de6d4>] mutex_lock+0x8/0x44
[<804d32ec>] radeon_mn_unregister+0x3c/0xb0
[<8041583c>] radeon_gem_object_free+0x18/0x2c
[<803a451c>] drm_gem_object_release_handle+0x74/0xac
[<803a45d0>] drm_gem_handle_delete+0x7c/0x128
[<803a5bf4>] drm_ioctl_kernel+0xb0/0x108
[<803a5e74>] drm_ioctl+0x200/0x3a8
[<803e07b4>] radeon_drm_ioctl+0x54/0xc0
[<801214dc>] do_vfs_ioctl+0x4e8/0x81c
[<80121864>] ksys_ioctl+0x54/0xb0
[<8001100c>] syscall_common+0x34/0x58
Link: https://lore.kernel.org/r/2fc7ef14-e89a-1f2d-381d-1c9b05da02d3@gmail.com
Fixes: 534e5f84b7 ("drm/radeon: use mmu_notifier_get/put for struct radeon_mn")
Reported-by: Petr Cvek <petrcvekcz@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
As an earlier patch made the macro argument more complicated, compilation
now fails with:
In file included from mm/madvise.c:30:
mm/madvise.c: In function 'madvise_free_single_vma':
arch/csky/include/asm/tlb.h:11:11: error:
invalid type argument of '->' (have 'struct mmu_gather')
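The usual fix for this class of macro error is parenthesizing the parameter;
roughly (illustrative, not the exact hunk):

    /* Unparenthesized parameter: expands to &tlb->fullmm for an
     * argument like '&tlb', which the compiler rejects.
     */
    if (!tlb->fullmm)

    /* Parenthesized: (&tlb)->fullmm, which is what was intended. */
    if (!(tlb)->fullmm)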
Link: https://lore.kernel.org/r/20190901193601.GB5208@mellanox.com
Fixes: 923bfc561e75 ("pagewalk: separate function pointers from iterator data")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>