Commit Graph

967235 Commits

Heiner Kallweit
682036b2b9 net: remove ip_tunnel_get_stats64
After having migrated all users remove ip_tunnel_get_stats64().

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 17:50:28 -08:00
Heiner Kallweit
98d7fc4638 ipv4/ipv6: switch to dev_get_tstats64
Replace ip_tunnel_get_stats64() with the new identical core function
dev_get_tstats64().

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 17:50:28 -08:00
Heiner Kallweit
8f3feb2420 vti: switch to dev_get_tstats64
Replace ip_tunnel_get_stats64() with the new identical core function
dev_get_tstats64().

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 17:50:28 -08:00
Heiner Kallweit
42f9e5f0c6 wireguard: switch to dev_get_tstats64
Replace ip_tunnel_get_stats64() with the new identical core function
dev_get_tstats64().

Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 17:50:28 -08:00
Heiner Kallweit
250f19c751 gtp: switch to dev_get_tstats64
Replace ip_tunnel_get_stats64() with the new identical core function
dev_get_tstats64().

Acked-by: Harald Welte <laforge@gnumonks.org>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 17:50:28 -08:00
Heiner Kallweit
b220a4a79c net: switch to dev_get_tstats64
Replace ip_tunnel_get_stats64() with the new identical core function
dev_get_tstats64().

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 17:50:28 -08:00
Heiner Kallweit
6b840a04fe ip6_tunnel: use ip_tunnel_get_stats64 as ndo_get_stats64 callback
Switch ip6_tunnel to the standard statistics pattern:
- use dev->stats for the less frequently accessed counters
- use dev->tstats for the frequently accessed counters

An additional benefit is that we now have 64bit statistics also on
32bit systems.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 17:50:28 -08:00
Heiner Kallweit
497a5757ce tun: switch to net core provided statistics counters
Switch tun to the standard statistics pattern:
- use netdev->stats for the less frequently accessed counters
- use netdev->tstats for the frequently accessed per-cpu counters

v3:
- add atomic_long_t member rx_frame_errors for making counter updates
  atomic

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 17:50:28 -08:00
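
A minimal sketch of the statistics pattern described above, for illustration only: frequently updated counters go into the per-cpu dev->tstats, while rarely updated error counters stay in dev->stats. The example_* helpers are hypothetical, and the field layout shown is the one from this era (later kernels switched pcpu_sw_netstats to u64_stats_t).

#include <linux/netdevice.h>
#include <linux/u64_stats_sync.h>

/* Hot path: bump the frequently accessed counters in per-cpu dev->tstats. */
static void example_count_rx(struct net_device *dev, unsigned int len)
{
        struct pcpu_sw_netstats *tstats = this_cpu_ptr(dev->tstats);

        u64_stats_update_begin(&tstats->syncp);
        tstats->rx_packets++;
        tstats->rx_bytes += len;
        u64_stats_update_end(&tstats->syncp);
}

/* Slow path: rare error counters stay in plain dev->stats. */
static void example_count_rx_error(struct net_device *dev)
{
        dev->stats.rx_errors++;
}
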
Heiner Kallweit
6a90062879 net: dsa: use net core stats64 handling
Use netdev->tstats instead of a member of dsa_slave_priv for storing
a pointer to the per-cpu counters. This allows us to use core
functionality for statistics handling.

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 17:50:27 -08:00
Heiner Kallweit
a18394269f net: core: add dev_get_tstats64 as a ndo_get_stats64 implementation
It's a frequent pattern to use netdev->stats for the less frequently
accessed counters and per-cpu counters for the frequently accessed
counters (rx/tx bytes/packets). Add a default ndo_get_stats64()
implementation for this use case.

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 17:50:27 -08:00
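
A hedged sketch of how a driver can adopt the new helper: point .ndo_get_stats64 at dev_get_tstats64() and allocate the per-cpu dev->tstats it aggregates on top of dev->stats. The example_* names are hypothetical.

#include <linux/netdevice.h>

static const struct net_device_ops example_netdev_ops = {
        /* ... other callbacks ... */
        .ndo_get_stats64 = dev_get_tstats64,    /* dev->stats + per-cpu dev->tstats */
};

static int example_setup(struct net_device *dev)
{
        dev->netdev_ops = &example_netdev_ops;

        /* per-cpu counters the helper aggregates */
        dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
        if (!dev->tstats)
                return -ENOMEM;
        return 0;
}

static void example_teardown(struct net_device *dev)
{
        free_percpu(dev->tstats);
}
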
Tobias Waldekranz
ca4d632aef net: dsa: mv88e6xxx: Export VTU as devlink region
Export the raw VTU data and related registers in a devlink region so
that it can be inspected from userspace and compared to the current
bridge configuration.

Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20201109082927.8684-1-tobias@waldekranz.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 17:43:22 -08:00
Jisheng Zhang
8b7e0a01df net: phy: microchip_t1: Don't set .config_aneg
The .config_aneg callback in microchip_t1 is genphy_config_aneg, so it
is not needed: the phy core already calls genphy_config_aneg() when
.config_aneg is NULL.

Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20201109091605.3951c969@xhacker.debian
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 17:41:17 -08:00
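
A hypothetical sketch of the resulting driver shape: with .config_aneg left unset, the phy core falls back to genphy_config_aneg() on its own. The IDs and strings below are placeholders, not the real microchip_t1 values.

#include <linux/module.h>
#include <linux/phy.h>

static struct phy_driver example_t1_driver[] = {
        {
                .phy_id         = 0x00ff00f0,           /* placeholder */
                .phy_id_mask    = 0xfffffff0,
                .name           = "Example 10BASE-T1 PHY",
                /* no .config_aneg: the phy core uses genphy_config_aneg() */
                .soft_reset     = genphy_soft_reset,
        },
};

module_phy_driver(example_t1_driver);

MODULE_LICENSE("GPL");
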
Kaixu Xia
785d21b826 net/mlx4: Assign boolean values to a bool variable
Fix the following coccinelle warnings:

./drivers/net/ethernet/mellanox/mlx4/en_rx.c:687:1-17: WARNING: Assignment of 0/1 to bool variable

Reported-by: Tosk Robot <tencent_os_robot@tencent.com>
Signed-off-by: Kaixu Xia <kaixuxia@tencent.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/r/1604732038-6057-1-git-send-email-kaixuxia@tencent.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 17:37:53 -08:00
Menglong Dong
6e822c2c29 net: udp: remove redundant initialization in udp_dump_one
The initialization of 'err' to '-EINVAL' is redundant and can be
removed, as the value is overwritten before it is ever used.

Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Link: https://lore.kernel.org/r/1604644960-48378-2-git-send-email-dong.menglong@zte.com.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 16:42:49 -08:00
Menglong Dong
cffb8f6177 net: udp: remove redundant initialization in udp_send_skb
The initialization of 'err' to 0 is redundant and can be removed,
as it is overwritten by ip_send_skb() and not used before that.

Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Link: https://lore.kernel.org/r/1604644960-48378-4-git-send-email-dong.menglong@zte.com.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 16:42:34 -08:00
Horatiu Vultur
0169b82054 bridge: mrp: Use hlist_head instead of list_head for mrp
Replace list_head with hlist_head for the MRP list under the bridge.
There is no need for a circular list when a linear list will work.
This also decreases the size of 'struct net_bridge'.

Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Link: https://lore.kernel.org/r/20201106215049.1448185-1-horatiu.vultur@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 16:42:12 -08:00
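
An illustrative sketch of the hlist pattern, not the bridge's actual structures: struct hlist_head is a single pointer (half the size of struct list_head), and the usual add/iterate helpers still apply.

#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/types.h>

struct example_mrp {
        struct hlist_node node;         /* was: struct list_head (two pointers) */
        u32 ring_id;
};

/* The embedded head is now one pointer instead of two. */
static HLIST_HEAD(example_mrp_list);

static void example_mrp_add(struct example_mrp *mrp)
{
        hlist_add_head_rcu(&mrp->node, &example_mrp_list);
}

/* Callers hold rcu_read_lock(). */
static struct example_mrp *example_mrp_find(u32 ring_id)
{
        struct example_mrp *mrp;

        hlist_for_each_entry_rcu(mrp, &example_mrp_list, node)
                if (mrp->ring_id == ring_id)
                        return mrp;
        return NULL;
}
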
Jakub Kicinski
084d0c13a4 Merge branch 'net-packet-make-packet_fanout-arr-size-configurable-up-to-64k'
Tanner Love says:

====================
net/packet: make packet_fanout.arr size configurable up to 64K

First patch makes the change; second patch adds unit tests.
====================

Link: https://lore.kernel.org/r/20201106180741.2839668-1-tannerlove.kernel@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 16:41:45 -08:00
Tanner Love
1db32acfde selftests/net: test max_num_members, fanout_args in psock_fanout
Add an additional control test that verifies:
- specifying two different max_num_members values fails
- specifying max_num_members > PACKET_FANOUT_MAX fails

In datapath tests, set max_num_members to PACKET_FANOUT_MAX.

Signed-off-by: Tanner Love <tannerlove@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 16:41:40 -08:00
Tanner Love
9c661b0b85 net/packet: make packet_fanout.arr size configurable up to 64K
One use case of PACKET_FANOUT is lockless reception with one socket
per CPU. On machines with increasingly many CPUs, the existing limit
of 256 becomes a practical constraint.

Increase PACKET_FANOUT_MAX to 64K. Expand setsockopt PACKET_FANOUT to
take an extra argument max_num_members. Also explicitly define a
fanout_args struct, instead of implicitly casting to an integer. This
documents the API and simplifies the control flow.

If max_num_members is not specified or is set to 0, then 256 is used,
same as before.

Signed-off-by: Tanner Love <tannerlove@google.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 16:41:40 -08:00
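
A hedged userspace sketch of the extended setsockopt, assuming the fanout_args layout this patch describes (id, type_flags, max_num_members); verify the struct against the installed linux/if_packet.h.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/if_packet.h>

/* fd is an already-created AF_PACKET socket. */
static int join_fanout(int fd, unsigned short id, unsigned int max_members)
{
        struct fanout_args args = {
                .id              = id,
                .type_flags      = PACKET_FANOUT_CPU,
                .max_num_members = max_members, /* 0 keeps the old default of 256 */
        };

        if (setsockopt(fd, SOL_PACKET, PACKET_FANOUT, &args, sizeof(args)) < 0) {
                fprintf(stderr, "PACKET_FANOUT: %s\n", strerror(errno));
                return -1;
        }
        return 0;
}
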
Menglong Dong
a3ce2b109a net: udp: introduce UDP_MIB_MEMERRORS for udp_mem
When udp_memory_allocated is at the limit, __udp_enqueue_schedule_skb
will return -ENOBUFS, and the skb will be dropped in __udp_queue_rcv_skb
without any counter being incremented. That makes it hard to find out
what happened when such drops occur.

So introduce UDP_MIB_MEMERRORS to account for these drops. The change
remains friendly to existing users, such as netstat:

$ netstat -u -s
Udp:
    0 packets received
    639 packets to unknown port received.
    158689 packet receive errors
    180022 packets sent
    RcvbufErrors: 20930
    MemErrors: 137759
UdpLite:
IpExt:
    InOctets: 257426235
    OutOctets: 257460598
    InNoECTPkts: 181177

v2:
- Fix some alignment problems

Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Link: https://lore.kernel.org/r/1604627354-43207-1-git-send-email-dong.menglong@zte.com.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09 15:34:44 -08:00
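
A hedged sketch of the accounting this adds; the real patch extends the existing error handling in the __udp_queue_rcv_skb() path, so the exact placement and macro variant there may differ.

#include <net/udp.h>

static void example_account_enqueue_error(struct sock *sk, int rc, bool is_udplite)
{
        if (rc == -ENOBUFS)             /* udp_memory_allocated over the limit */
                UDP_INC_STATS(sock_net(sk), UDP_MIB_MEMERRORS, is_udplite);
        else if (rc == -ENOMEM)         /* receive buffer full (existing counter) */
                UDP_INC_STATS(sock_net(sk), UDP_MIB_RCVBUFERRORS, is_udplite);
}
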
Voon Weifeng
bff6f1db91 stmmac: intel: change all EHL/TGL to auto detect phy addr
Set all EHL/TGL phy_addr values to -1 so that the driver automatically
detects the PHY address at run-time by probing all 32 possible addresses.

Signed-off-by: Voon Weifeng <weifeng.voon@intel.com>
Signed-off-by: Wong Vee Khee <vee.khee.wong@intel.com>
Link: https://lore.kernel.org/r/20201106094341.4241-1-vee.khee.wong@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 16:11:54 -08:00
Wang Qing
ef9ac20911 net: usb: fix spelling typo in cdc_ncm.c
Actually, withing should be within.

Signed-off-by: Wang Qing <wangqing@vivo.com>
Link: https://lore.kernel.org/r/1604649025-22559-1-git-send-email-wangqing@vivo.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:52:30 -08:00
Wang Qing
75a5fb0cdb net: core: fix spelling typo in flow_dissector.c
withing should be within.

Signed-off-by: Wang Qing <wangqing@vivo.com>
Link: https://lore.kernel.org/r/1604650310-30432-1-git-send-email-wangqing@vivo.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:52:21 -08:00
Jakub Kicinski
2d152760a9 Merge branch 'net-ipa-constrain-gsi-interrupts'
Alex Elder says:

====================
net: ipa: constrain GSI interrupts

The goal of this series is to more tightly control when GSI
interrupts are enabled.  This is a long-ish series, so I'll
describe it in parts.

The first patch is actually unrelated...  I forgot to include
it in my previous series (which exposed the GSI layer to the
IPA version).  It is a trivial comments-only update patch.

The second patch defers registering the GSI interrupt handler
until *after* all of the resources that handler touches have
been initialized.  In practice, we don't see this interrupt
that early, but this precludes an obvious problem.

The next two patches are simple changes.  The first just
trivially renames a field.  The second switches from using
constant mask values to using an enumerated type of bit
positions to represent each GSI interrupt type.

The rest implement the "real work."  First, all interrupts
are disabled at initialization time.  Next, we keep track of
a bitmask of enabled GSI interrupt types, updating it each
time we enable or disable one of them.  From there we have
a set of patches that one-by-one enable each interrupt type
only during the period it is required.  This includes allowing
a channel to generate IEOB interrupts only when it has been
enabled.  And finally, the last patch simplifies some code
now that all GSI interrupt types are handled uniformly.
====================

Link: https://lore.kernel.org/r/20201105181407.8006-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:21 -08:00
Alex Elder
8194be79fb net: ipa: pass a value to gsi_irq_type_update()
Now that all of the GSI interrupts are handled uniformly,
change gsi_irq_type_update() so it takes a value.  Have the
function assign that value to the cached mask of enabled GSI
IRQ types before writing it to hardware.

Note that gsi_irq_teardown() will only be called after
gsi_irq_disable(), so it's not necessary for the former
to disable all IRQ types.  Get rid of that.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:17 -08:00
Alex Elder
352f26a886 net: ipa: only enable GSI general IRQs when needed
Most GSI general errors are unrecoverable without a full reset.
Despite that, we want to receive these errors so we can at least
report what happened before whatever undefined behavior ensues.

Explicitly disable all such interrupts in gsi_irq_setup(), then
enable those we want in gsi_irq_enable().  List the interrupt types
we are interested in (everything but breakpoint) explicitly rather
than using GSI_CNTXT_GSI_IRQ_ALL, and remove that symbol's
definition.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:17 -08:00
Alex Elder
46f748ccaf net: ipa: explicitly disallow inter-EE interrupts
It is possible for other execution environments (EEs, like the modem)
to request changes to local (AP) channel or event ring state.  We do
not support this feature.

In gsi_irq_setup(), explicitly zero the mask that defines which
channels are permitted to generate inter-EE channel state change
interrupts.  Do the same for the event ring mask.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:17 -08:00
Alex Elder
06c8632833 net: ipa: only enable GSI IEOB IRQs when needed
A GSI channel must be started before it can be used to perform a
data transfer (or command) transaction.  And the only time we'll see
an IEOB interrupt is if we send a transaction to a started channel.
Therefore we do not need to have the IEOB interrupt type enabled
until at least one channel has been started.  And once the last
started channel has been stopped, we can disable the IEOB interrupt
type again.

We already enable the IEOB interrupt for a particular channel only
when it is started.  Extend that by having the IEOB interrupt *type*
be enabled only when at least one channel is in STARTED state.

Disallow all channels from triggering the IEOB interrupt in
gsi_irq_setup().  We only enable a channel's interrupt when
needed, so there is no longer any need to zero the channel mask
in gsi_irq_disable().

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:17 -08:00
Alex Elder
d6c9e3f506 net: ipa: only enable generic command completion IRQ when needed
The completion of a generic EE GSI command is signaled by a global
interrupt of type GP_INT1.  The only other used type for a global
interrupt is a hardware error report.

First, disallow all global interrupt types in gsi_irq_setup().  We
want to know about hardware errors, so re-enable the interrupt type
in gsi_irq_enable(), to allow hardware errors to be reported.
Disable that interrupt type again in gsi_irq_disable().

We only issue generic EE commands one at a time, and there's no
reason to keep the completion interrupt enabled when no generic
EE command is pending.  We furthermore have no need to enable the
GP_INT2 or GP_INT3 interrupt types (which aren't used).

The change in gsi_irq_enable() makes GSI_CNTXT_GLOB_IRQ_ALL unused,
so get rid of it.  Have gsi_generic_command() enable the GP_INT1
interrupt type (in addition to the ERROR_INT type) only while a
generic command is pending.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:17 -08:00
Alex Elder
b4175f8731 net: ipa: only enable GSI event control IRQs when needed
A GSI event ring causes an event control interrupt to fire whenever
its state changes (between NOT_ALLOCATED and ALLOCATED).  No event
ring should ever change state except when we request it to.

Currently, we permit *all* event rings to generate event control
interrupts--even those that are never used.  And we enable event
control interrupts essentially at all times, from setup to teardown.

Instead, only enable the event control interrupt type for the
duration of an event ring command, and when doing so, only allow
the event ring being operated upon to cause the interrupt to fire.
Disallow all event rings from issuing the event control interrupt
in gsi_irq_setup().

Because an event ring's interrupt is only enabled when needed,
there is no longer any need to zero the event channel mask in
gsi_irq_disable().

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:17 -08:00
Alex Elder
b054d4f9eb net: ipa: only enable GSI channel control IRQs when needed
A GSI channel causes a channel control interrupt to fire whenever
its state changes (between NOT_ALLOCATED, ALLOCATED, STARTED, etc.).
We do not support inter-EE channel commands (initiated by other EEs),
so no channel should ever change state except when we request it to.

Currently, we permit *all* channels to generate channel control
interrupts--even those that are never used.  And we enable channel
control interrupts essentially at all times, from setup to teardown.

Instead, disable all channel control interrupts initially in
gsi_irq_setup(), and only enable the channel control interrupt
type for the duration of a channel command.  When doing so, only
allow the channel being operated upon to cause the interrupt to
fire.

Because a channel's interrupt is now enabled only when needed (one
channel at a time), there is no longer any need to zero the channel
mask in gsi_irq_disable().

Add new gsi_irq_type_enable() and gsi_irq_type_disable() as helper
functions to control whether a given GSI interrupt type is enabled.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:16 -08:00
Alex Elder
3ca97ffd98 net: ipa: cache last-saved GSI IRQ enabled type
Keep track of the set of GSI interrupt types that are currently
enabled by recording the mask value to write (or last written) to
the TYPE_IRQ_MSK register.

Create a new helper function gsi_irq_type_update() to handle
actually writing the register.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:16 -08:00
Alex Elder
97eb94c8c7 net: ipa: disable all GSI interrupt types initially
Introduce gsi_irq_setup() and gsi_irq_teardown() to disable all
GSI interrupts when first setting up GSI hardware, and to clean
things up when we're done.

Re-enable all GSI interrupt types in gsi_irq_enable(), but do
so only after each of the type-specific interrupt masks has
been configured.  Similarly, disable all interrupt types in
gsi_irq_disable()--first--before zeroing out the type-specific
masks.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:16 -08:00
Alex Elder
f9b28804ab net: ipa: define GSI interrupt types with an enum
Define the GSI interrupt types with an enumerated type whose values
are the bit positions representing each interrupt type.  Include a
short comment describing how each interrupt type is used.

Build up the enabled interrupt mask explicitly in gsi_irq_enable(),
and get rid of the definition of GSI_CNTXT_TYPE_IRQ_MSK_ALL.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:16 -08:00
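
A self-contained sketch of the pattern running through the GSI entries above: an enum of interrupt-type bit positions, a cached enable mask, and enable/disable helpers built with BIT(). All example_* names and the register offset are illustrative, not the driver's.

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/types.h>

/* Bit positions of the interrupt types (illustrative names). */
enum example_irq_type_id {
        EXAMPLE_CH_CTRL,        /* channel control state changes */
        EXAMPLE_EV_CTRL,        /* event ring control state changes */
        EXAMPLE_GLOB_EE,        /* global (error) interrupts */
        EXAMPLE_IEOB,           /* transfer completion interrupts */
        EXAMPLE_GENERAL,        /* general errors */
};

#define EXAMPLE_TYPE_IRQ_MSK_OFFSET     0x0088  /* made-up register offset */

struct example_gsi {
        void __iomem *virt;
        u32 type_enabled_bitmap;        /* cache of the last value written */
};

static void example_irq_type_update(struct example_gsi *gsi, u32 val)
{
        gsi->type_enabled_bitmap = val;
        iowrite32(val, gsi->virt + EXAMPLE_TYPE_IRQ_MSK_OFFSET);
}

static void example_irq_type_enable(struct example_gsi *gsi,
                                    enum example_irq_type_id type_id)
{
        example_irq_type_update(gsi, gsi->type_enabled_bitmap | BIT(type_id));
}

static void example_irq_type_disable(struct example_gsi *gsi,
                                     enum example_irq_type_id type_id)
{
        example_irq_type_update(gsi, gsi->type_enabled_bitmap & ~BIT(type_id));
}
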
Alex Elder
a054539db1 net: ipa: rename gsi->event_enable_bitmap
Rename the "event_enable_bitmap" field of the GSI structure to be
"ieob_enabled_bitmap".  An upcoming patch will cache the last value
stored for another interrupt mask and this is a more direct naming
convention to follow.

Add a few comments to explain the bitmap fields in the GSI structure.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:16 -08:00
Alex Elder
0b8d676108 net: ipa: request GSI IRQ later
Introduce gsi_irq_init() and gsi_irq_exit(), to encapsulate looking
up the GSI IRQ and registering its handler.  Call gsi_irq_init() a
little later in gsi_init(), and initialize the completion earlier.
The IRQ handler accesses both the GSI virtual memory pointer and the
completion, and this way these things will have been initialized
before gsi_irq() can ever be called.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:16 -08:00
Alex Elder
4a04d65c96 net: ipa: refer to IPA versions, not GSI
The GSI code is now exposed to IPA version numbers, and we handle
version-specific behavior based on the IPA version.

Modify some comments that talk about GSI versions so they reference
IPA versions instead.  Correct version number errors in a couple of
these comments.

The (comment) mapping between IPA and GSI versions in the definition
of the ipa_version enumerated type remains.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 15:39:16 -08:00
Xie He
f8ae7bbec7 net: x25_asy: Delete the x25_asy driver
This driver transports LAPB (X.25 link layer) frames over TTY links.

I can safely say that this driver has no actual user because it was
not working at all until:
commit 8fdcabeac3 ("drivers/net/wan/x25_asy: Fix to make it work")

The code in its current state still has problems:

1.
The uses of "struct x25_asy" in x25_asy_unesc (when receiving) and in
x25_asy_write_wakeup (when sending) are not protected by locks against
x25_asy_change_mtu's changing of the transmitting/receiving buffers.
Also, all "netif_running" checks in this driver are not protected by
locks against the ndo_stop function.

2.
The driver stops all TTY read/write when the netif is down.
I think this is not right because this may cause the last outgoing frame
before the netif goes down to be incompletely transmitted, and the first
incoming frame after the netif goes up to be incompletely received.

And there may also be other problems.

I was planning to fix these problems but after recent discussions about
deleting other old networking code, I think we may just delete this
driver, too.

Signed-off-by: Xie He <xie.he.0141@gmail.com>
Acked-by: Martin Schiller <ms@dev.tdt.de>
Link: https://lore.kernel.org/r/20201105073434.429307-1-xie.he.0141@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 14:13:44 -08:00
Parshuram Thombare
0012eeb370 net: macb: fix NULL dereference due to no pcs_config method
This patch fixes a NULL pointer dereference caused by a NULL pcs_config
method in pcs_ops.

Fixes: e4e143e26c ("net: macb: add support for high speed interface")
Reported-by: Nicolas Ferre <Nicolas.Ferre@microchip.com>
Link: https://lore.kernel.org/netdev/2db854c7-9ffb-328a-f346-f68982723d29@microchip.com/
Signed-off-by: Parshuram Thombare <pthombar@cadence.com>
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Link: https://lore.kernel.org/r/1604599113-2488-1-git-send-email-pthombar@cadence.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 13:21:38 -08:00
Min Li
6c196f36f5 ptp: idt82p33: optimize _idt82p33_adjfine
Use div_s64() so that the neg_adj handling is not needed.

Signed-off-by: Min Li <min.li.xe@renesas.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Link: https://lore.kernel.org/r/1604634729-24960-3-git-send-email-min.li.xe@renesas.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 13:10:31 -08:00
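
A hedged before/after sketch of the simplification: div_s64() divides a signed 64-bit value directly, so the sign no longer needs to be tracked in a separate neg_adj flag. The arithmetic is illustrative, not the driver's actual formula.

#include <linux/math64.h>

/* Before: do_div() takes an unsigned dividend, so the sign is tracked
 * separately and re-applied afterwards.
 */
static s64 example_scale_ppb_old(s32 ppb, u32 factor)
{
        bool neg_adj = false;
        u64 tmp;

        if (ppb < 0) {
                neg_adj = true;
                ppb = -ppb;
        }
        tmp = (u64)ppb * factor;
        do_div(tmp, 1000000000U);
        return neg_adj ? -(s64)tmp : (s64)tmp;
}

/* After: div_s64() handles the signed value directly, no flag needed. */
static s64 example_scale_ppb_new(s32 ppb, u32 factor)
{
        return div_s64((s64)ppb * factor, 1000000000);
}
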
Min Li
e4c6eb6834 ptp: idt82p33: use i2c_master_send for bus write
Refactor idt82p33_xfer to use i2c_master_send() for write operations,
because some I2C controllers only work with single-burst write
transactions.

Signed-off-by: Min Li <min.li.xe@renesas.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Link: https://lore.kernel.org/r/1604634729-24960-2-git-send-email-min.li.xe@renesas.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 13:10:27 -08:00
Min Li
e014ae3949 ptp: idt82p33: add adjphase support
Add idt82p33_adjphase() to support PHC write phase mode.

Signed-off-by: Min Li <min.li.xe@renesas.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Link: https://lore.kernel.org/r/1604634729-24960-1-git-send-email-min.li.xe@renesas.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 13:10:20 -08:00
Menglong Dong
419a38cecf net: macvlan: remove redundant initialization in macvlan_dev_netpoll_setup
The initialization of err to 0 is useless, as it is immediately
overwritten with -ENOMEM, so remove it.

Changes since v1:
-Keep -ENOMEM still.

Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Link: https://lore.kernel.org/r/1604541244-3241-1-git-send-email-dong.menglong@zte.com.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 12:18:41 -08:00
Kaixu Xia
ea8146c684 cxgb4: Fix the -Wmisleading-indentation warning
Fix the gcc warning:

drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c:2673:9: warning: this 'for' clause does not guard... [-Wmisleading-indentation]
 2673 |         for (i = 0; i < n; ++i) \

Reported-by: Tosk Robot <tencent_os_robot@tencent.com>
Signed-off-by: Kaixu Xia <kaixuxia@tencent.com>
Link: https://lore.kernel.org/r/1604467444-23043-1-git-send-email-kaixuxia@tencent.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 11:56:03 -08:00
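
An illustrative reconstruction of the warning class being silenced; the actual cxgb4 code lives inside a macro, but the shape of the problem and of the fix is the same.

#include <linux/seq_file.h>

/* Before: the unbraced 'for' guards only the first statement, while the
 * indentation suggests both lines are in the loop, so gcc warns.
 */
static void example_show_before(struct seq_file *seq, const u32 *vals, int n)
{
        int i;

        for (i = 0; i < n; ++i)
                seq_printf(seq, " %u", vals[i]);
                seq_putc(seq, '\n');    /* actually runs once, after the loop */
}

/* After: braces make the guarded block explicit and match the indentation. */
static void example_show_after(struct seq_file *seq, const u32 *vals, int n)
{
        int i;

        for (i = 0; i < n; ++i) {
                seq_printf(seq, " %u", vals[i]);
        }
        seq_putc(seq, '\n');
}
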
Jakub Kicinski
0798827b47 Merge branch 'net-axienet-dynamically-enable-mdio-interface'
Radhey Shyam Pandey says:

====================
net: axienet: Dynamically enable MDIO interface

This patchset dynamically enable MDIO interface. The background for this
change is coming from Cadence GEM controller(macb) in which MDC is active
only during MDIO read or write operations while the PHY registers are
read or written. It is implemented as an IP feature.

For axiethernet as dynamic MDC enable/disable is not supported in hw
we are implementing it in sw. This change doesn't affect any existing
functionality.
====================

Link: https://lore.kernel.org/r/1604402770-78045-1-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 11:13:55 -08:00
Clayton Rayment
253761a0e6 net: xilinx: axiethernet: Enable dynamic MDIO MDC
The MDIO spec does not require MDC to run at all times, only while MDIO
transactions are occurring. This patch allows the xilinx_axienet driver
to disable MDC when it is not in use and re-enable it when needed. It
also simplifies the driver by removing the MDC disable and enable steps
from the device reset sequence.

Signed-off-by: Clayton Rayment <clayton.rayment@xilinx.com>
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 11:13:52 -08:00
Radhey Shyam Pandey
6c3cbaa0f0 net: xilinx: axiethernet: Introduce helper functions for MDC enable/disable
Introduce helper functions to enable/disable the MDIO interface clock.
This change serves as a preparatory patch for the upcoming feature to
dynamically control the management bus clock.

Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 11:13:51 -08:00
Jakub Kicinski
ad8fc41c78 Merge branch 'net-convert-tasklets-to-use-new-tasklet_setup-api'
Allen Pais says:

====================
net: convert tasklets to use new tasklet_setup API

Commit 12cc923f1c ("tasklet: Introduce new initialization API")
introduced a new tasklet initialization API. This series converts
all the net/* drivers to use the new tasklet_setup() API.

The following series is based on net-next (9faebeb2d)

v3:
 introduce qdisc_from_priv, suggested by Eric Dumazet.
v2:
  get rid of QDISC_ALIGN()
v1:
  fix kerneldoc
====================

Link: https://lore.kernel.org/r/20201103091823.586717-1-allen.lkml@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 10:41:16 -08:00
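
A hedged sketch of the conversion pattern this series applies: the callback now receives the tasklet pointer itself and recovers its containing structure with from_tasklet(). The example_* names are hypothetical.

#include <linux/interrupt.h>

struct example_dev {
        struct tasklet_struct task;
        /* ... driver state ... */
};

/* New-style callback: takes the tasklet pointer itself. */
static void example_task_cb(struct tasklet_struct *t)
{
        struct example_dev *ed = from_tasklet(ed, t, task);

        /* ... process ed ... */
}

static void example_dev_init(struct example_dev *ed)
{
        /* Old API: tasklet_init(&ed->task, <handler>, (unsigned long)ed); */
        tasklet_setup(&ed->task, example_task_cb);
}
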
Allen Pais
158d31da1c net: xfrm: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.

Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <apais@linux.microsoft.com>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 10:41:15 -08:00
Allen Pais
fcb8e3a328 net: smc: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.

Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <apais@linux.microsoft.com>
Acked-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07 10:41:15 -08:00