commit 33ea1340ba
Merge tag 'net-5.19-final' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bluetooth and netfilter, no known blockers for
  the release.

  Current release - regressions:

   - wifi: mac80211: do not abuse fq.lock in ieee80211_do_stop(), fix
     taking the lock before it's initialized

   - Bluetooth: mgmt: fix double free on error path

  Current release - new code bugs:

   - eth: ice: fix tunnel checksum offload with fragmented traffic

  Previous releases - regressions:

   - tcp: md5: fix IPv4-mapped support after refactoring, don't take
     the pure v6 path

   - Revert "tcp: change pingpong threshold to 3", improving detection
     of interactive sessions

   - mld: fix netdev refcount leak in mld_{query | report}_work() due
     to a race

   - Bluetooth:
      - always set event mask on suspend, avoid early wake ups
      - L2CAP: fix use-after-free caused by l2cap_chan_put

   - bridge: do not send empty IFLA_AF_SPEC attribute

  Previous releases - always broken:

   - ping6: fix memleak in ipv6_renew_options()

   - sctp: prevent null-deref caused by over-eager error paths

   - virtio-net: fix the race between refill work and close, resulting
     in NAPI scheduled after close and a BUG()

   - macsec:
      - fix three netlink parsing bugs
      - avoid breaking the device state on invalid change requests
      - fix a memleak in another error path

  Misc:

   - dt-bindings: net: ethernet-controller: rework 'fixed-link' schema

   - two more batches of sysctl data race adornment"

* tag 'net-5.19-final' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (67 commits)
  stmmac: dwmac-mediatek: fix resource leak in probe
  ipv6/addrconf: fix a null-ptr-deref bug for ip6_ptr
  net: ping6: Fix memleak in ipv6_renew_options().
  net/funeth: Fix fun_xdp_tx() and XDP packet reclaim
  sctp: leave the err path free in sctp_stream_init to sctp_stream_free
  sfc: disable softirqs for ptp TX
  ptp: ocp: Select CRC16 in the Kconfig.
  tcp: md5: fix IPv4-mapped support
  virtio-net: fix the race between refill work and close
  mptcp: Do not return EINPROGRESS when subflow creation succeeds
  Bluetooth: L2CAP: Fix use-after-free caused by l2cap_chan_put
  Bluetooth: Always set event mask on suspend
  Bluetooth: mgmt: Fix double free on error path
  wifi: mac80211: do not abuse fq.lock in ieee80211_do_stop()
  ice: do not setup vlan for loopback VSI
  ice: check (DD | EOF) bits on Rx descriptor rather than (EOP | RS)
  ice: Fix VSIs unable to share unicast MAC
  ice: Fix tunnel checksum offload with fragmented traffic
  ice: Fix max VLANs available for VF
  netfilter: nft_queue: only allow supported familes and hooks
  ...
diff --git a/Documentation/devicetree/bindings/net/ethernet-controller.yaml b/Documentation/devicetree/bindings/net/ethernet-controller.yaml
@@ -167,70 +167,65 @@ properties:
       - in-band-status
 
   fixed-link:
-    allOf:
-      - if:
-          type: array
-        then:
-          deprecated: true
-          items:
-            - minimum: 0
-              maximum: 31
-              description:
-                Emulated PHY ID, choose any but unique to the all
-                specified fixed-links
-
-            - enum: [0, 1]
-              description:
-                Duplex configuration. 0 for half duplex or 1 for
-                full duplex
-
-            - enum: [10, 100, 1000, 2500, 10000]
-              description:
-                Link speed in Mbits/sec.
-
-            - enum: [0, 1]
-              description:
-                Pause configuration. 0 for no pause, 1 for pause
-
-            - enum: [0, 1]
-              description:
-                Asymmetric pause configuration. 0 for no asymmetric
-                pause, 1 for asymmetric pause
-
-      - if:
-          type: object
-        then:
-          properties:
-            speed:
-              description:
-                Link speed.
-              $ref: /schemas/types.yaml#/definitions/uint32
-              enum: [10, 100, 1000, 2500, 10000]
-
-            full-duplex:
-              $ref: /schemas/types.yaml#/definitions/flag
-              description:
-                Indicates that full-duplex is used. When absent, half
-                duplex is assumed.
-
-            pause:
-              $ref: /schemas/types.yaml#definitions/flag
-              description:
-                Indicates that pause should be enabled.
-
-            asym-pause:
-              $ref: /schemas/types.yaml#/definitions/flag
-              description:
-                Indicates that asym_pause should be enabled.
-
-            link-gpios:
-              maxItems: 1
-              description:
-                GPIO to determine if the link is up
-
-          required:
-            - speed
+    oneOf:
+      - $ref: /schemas/types.yaml#/definitions/uint32-array
+        deprecated: true
+        items:
+          - minimum: 0
+            maximum: 31
+            description:
+              Emulated PHY ID, choose any but unique to the all
+              specified fixed-links
+
+          - enum: [0, 1]
+            description:
+              Duplex configuration. 0 for half duplex or 1 for
+              full duplex
+
+          - enum: [10, 100, 1000, 2500, 10000]
+            description:
+              Link speed in Mbits/sec.
+
+          - enum: [0, 1]
+            description:
+              Pause configuration. 0 for no pause, 1 for pause
+
+          - enum: [0, 1]
+            description:
+              Asymmetric pause configuration. 0 for no asymmetric
+              pause, 1 for asymmetric pause
+
+      - type: object
+        additionalProperties: false
+        properties:
+          speed:
+            description:
+              Link speed.
+            $ref: /schemas/types.yaml#/definitions/uint32
+            enum: [10, 100, 1000, 2500, 10000]
+
+          full-duplex:
+            $ref: /schemas/types.yaml#/definitions/flag
+            description:
+              Indicates that full-duplex is used. When absent, half
+              duplex is assumed.
+
+          pause:
+            $ref: /schemas/types.yaml#definitions/flag
+            description:
+              Indicates that pause should be enabled.
+
+          asym-pause:
+            $ref: /schemas/types.yaml#/definitions/flag
+            description:
+              Indicates that asym_pause should be enabled.
+
+          link-gpios:
+            maxItems: 1
+            description:
+              GPIO to determine if the link is up
+
+        required:
+          - speed
 
 additionalProperties: true
diff --git a/Documentation/devicetree/bindings/net/fsl,fec.yaml b/Documentation/devicetree/bindings/net/fsl,fec.yaml
@@ -183,6 +183,7 @@ properties:
       Should specify the gpio for phy reset.
 
   phy-reset-duration:
     $ref: /schemas/types.yaml#/definitions/uint32
+    deprecated: true
     description:
       Reset duration in milliseconds. Should present only if property
@@ -191,12 +192,14 @@ properties:
       and 1 millisecond will be used instead.
 
   phy-reset-active-high:
     type: boolean
+    deprecated: true
    description:
      If present then the reset sequence using the GPIO specified in the
      "phy-reset-gpios" property is reversed (H=reset state, L=operation state).
 
   phy-reset-post-delay:
     $ref: /schemas/types.yaml#/definitions/uint32
+    deprecated: true
     description:
       Post reset delay in milliseconds. If present then a delay of phy-reset-post-delay
diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
@@ -2866,7 +2866,14 @@ sctp_rmem - vector of 3 INTEGERs: min, default, max
         Default: 4K
 
 sctp_wmem - vector of 3 INTEGERs: min, default, max
-        Currently this tunable has no effect.
+        Only the first value ("min") is used, "default" and "max" are
+        ignored.
+
+        min: Minimum size of send buffer that can be used by SCTP sockets.
+        It is guaranteed to each SCTP socket (but not association) even
+        under moderate memory pressure.
+
+        Default: 4K
 
 addr_scope_policy - INTEGER
         Control IPv4 address scoping - draft-stewart-tsvwg-sctp-ipv4-00
diff --git a/drivers/net/ethernet/fungible/funeth/funeth_rx.c b/drivers/net/ethernet/fungible/funeth/funeth_rx.c
@@ -142,6 +142,7 @@ static void *fun_run_xdp(struct funeth_rxq *q, skb_frag_t *frags, void *buf_va,
                          int ref_ok, struct funeth_txq *xdp_q)
 {
         struct bpf_prog *xdp_prog;
+        struct xdp_frame *xdpf;
         struct xdp_buff xdp;
         u32 act;
 
@@ -163,7 +164,9 @@ static void *fun_run_xdp(struct funeth_rxq *q, skb_frag_t *frags, void *buf_va,
         case XDP_TX:
                 if (unlikely(!ref_ok))
                         goto pass;
-                if (!fun_xdp_tx(xdp_q, xdp.data, xdp.data_end - xdp.data))
+
+                xdpf = xdp_convert_buff_to_frame(&xdp);
+                if (!xdpf || !fun_xdp_tx(xdp_q, xdpf))
                         goto xdp_error;
                 FUN_QSTAT_INC(q, xdp_tx);
                 q->xdp_flush |= FUN_XDP_FLUSH_TX;
diff --git a/drivers/net/ethernet/fungible/funeth/funeth_tx.c b/drivers/net/ethernet/fungible/funeth/funeth_tx.c
@@ -466,7 +466,7 @@ static unsigned int fun_xdpq_clean(struct funeth_txq *q, unsigned int budget)
 
                 do {
                         fun_xdp_unmap(q, reclaim_idx);
-                        page_frag_free(q->info[reclaim_idx].vaddr);
+                        xdp_return_frame(q->info[reclaim_idx].xdpf);
 
                         trace_funeth_tx_free(q, reclaim_idx, 1, head);
 
@@ -479,11 +479,11 @@ static unsigned int fun_xdpq_clean(struct funeth_txq *q, unsigned int budget)
         return npkts;
 }
 
-bool fun_xdp_tx(struct funeth_txq *q, void *data, unsigned int len)
+bool fun_xdp_tx(struct funeth_txq *q, struct xdp_frame *xdpf)
 {
         struct fun_eth_tx_req *req;
         struct fun_dataop_gl *gle;
-        unsigned int idx;
+        unsigned int idx, len;
         dma_addr_t dma;
 
         if (fun_txq_avail(q) < FUN_XDP_CLEAN_THRES)
@@ -494,7 +494,8 @@ bool fun_xdp_tx(struct funeth_txq *q, void *data, unsigned int len)
                 return false;
         }
 
-        dma = dma_map_single(q->dma_dev, data, len, DMA_TO_DEVICE);
+        len = xdpf->len;
+        dma = dma_map_single(q->dma_dev, xdpf->data, len, DMA_TO_DEVICE);
         if (unlikely(dma_mapping_error(q->dma_dev, dma))) {
                 FUN_QSTAT_INC(q, tx_map_err);
                 return false;
@@ -514,7 +515,7 @@ bool fun_xdp_tx(struct funeth_txq *q, void *data, unsigned int len)
         gle = (struct fun_dataop_gl *)req->dataop.imm;
         fun_dataop_gl_init(gle, 0, 0, len, dma);
 
-        q->info[idx].vaddr = data;
+        q->info[idx].xdpf = xdpf;
 
         u64_stats_update_begin(&q->syncp);
         q->stats.tx_bytes += len;
@@ -545,12 +546,9 @@ int fun_xdp_xmit_frames(struct net_device *dev, int n,
         if (unlikely(q_idx >= fp->num_xdpqs))
                 return -ENXIO;
 
-        for (q = xdpqs[q_idx], i = 0; i < n; i++) {
-                const struct xdp_frame *xdpf = frames[i];
-
-                if (!fun_xdp_tx(q, xdpf->data, xdpf->len))
+        for (q = xdpqs[q_idx], i = 0; i < n; i++)
+                if (!fun_xdp_tx(q, frames[i]))
                         break;
-        }
 
         if (unlikely(flags & XDP_XMIT_FLUSH))
                 fun_txq_wr_db(q);
@@ -577,7 +575,7 @@ static void fun_xdpq_purge(struct funeth_txq *q)
                 unsigned int idx = q->cons_cnt & q->mask;
 
                 fun_xdp_unmap(q, idx);
-                page_frag_free(q->info[idx].vaddr);
+                xdp_return_frame(q->info[idx].xdpf);
                 q->cons_cnt++;
         }
 }
diff --git a/drivers/net/ethernet/fungible/funeth/funeth_txrx.h b/drivers/net/ethernet/fungible/funeth/funeth_txrx.h
@@ -95,8 +95,8 @@ struct funeth_txq_stats { /* per Tx queue SW counters */
 
 struct funeth_tx_info {      /* per Tx descriptor state */
         union {
-                struct sk_buff *skb; /* associated packet */
-                void *vaddr;         /* start address for XDP */
+                struct sk_buff *skb;    /* associated packet (sk_buff path) */
+                struct xdp_frame *xdpf; /* associated XDP frame (XDP path) */
         };
 };
 
@@ -245,7 +245,7 @@ static inline int fun_irq_node(const struct fun_irq *p)
 int fun_rxq_napi_poll(struct napi_struct *napi, int budget);
 int fun_txq_napi_poll(struct napi_struct *napi, int budget);
 netdev_tx_t fun_start_xmit(struct sk_buff *skb, struct net_device *netdev);
-bool fun_xdp_tx(struct funeth_txq *q, void *data, unsigned int len);
+bool fun_xdp_tx(struct funeth_txq *q, struct xdp_frame *xdpf);
 int fun_xdp_xmit_frames(struct net_device *dev, int n,
                         struct xdp_frame **frames, u32 flags);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -1925,11 +1925,15 @@ static void i40e_vsi_setup_queue_map(struct i40e_vsi *vsi,
                  * non-zero req_queue_pairs says that user requested a new
                  * queue count via ethtool's set_channels, so use this
                  * value for queues distribution across traffic classes
+                 * We need at least one queue pair for the interface
+                 * to be usable as we see in else statement.
                  */
                 if (vsi->req_queue_pairs > 0)
                         vsi->num_queue_pairs = vsi->req_queue_pairs;
                 else if (pf->flags & I40E_FLAG_MSIX_ENABLED)
                         vsi->num_queue_pairs = pf->num_lan_msix;
+                else
+                        vsi->num_queue_pairs = 1;
         }
 
         /* Number of queues per enabled TC */
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -658,7 +658,8 @@ static int ice_lbtest_receive_frames(struct ice_rx_ring *rx_ring)
                 rx_desc = ICE_RX_DESC(rx_ring, i);
 
                 if (!(rx_desc->wb.status_error0 &
-                    cpu_to_le16(ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS)))
+                    (cpu_to_le16(BIT(ICE_RX_FLEX_DESC_STATUS0_DD_S)) |
+                     cpu_to_le16(BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S)))))
                         continue;
 
                 rx_buf = &rx_ring->rx_buf[i];
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -4656,6 +4656,8 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
                 ice_set_safe_mode_caps(hw);
         }
 
+        hw->ucast_shared = true;
+
         err = ice_init_pf(pf);
         if (err) {
                 dev_err(dev, "ice_init_pf failed: %d\n", err);
@@ -6011,10 +6013,12 @@ int ice_vsi_cfg(struct ice_vsi *vsi)
         if (vsi->netdev) {
                 ice_set_rx_mode(vsi->netdev);
 
-                err = ice_vsi_vlan_setup(vsi);
+                if (vsi->type != ICE_VSI_LB) {
+                        err = ice_vsi_vlan_setup(vsi);
 
-                if (err)
-                        return err;
+                        if (err)
+                                return err;
+                }
         }
         ice_vsi_cfg_dcb_rings(vsi);
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
@@ -1309,39 +1309,6 @@ out_put_vf:
         return ret;
 }
 
-/**
- * ice_unicast_mac_exists - check if the unicast MAC exists on the PF's switch
- * @pf: PF used to reference the switch's rules
- * @umac: unicast MAC to compare against existing switch rules
- *
- * Return true on the first/any match, else return false
- */
-static bool ice_unicast_mac_exists(struct ice_pf *pf, u8 *umac)
-{
-        struct ice_sw_recipe *mac_recipe_list =
-                &pf->hw.switch_info->recp_list[ICE_SW_LKUP_MAC];
-        struct ice_fltr_mgmt_list_entry *list_itr;
-        struct list_head *rule_head;
-        struct mutex *rule_lock; /* protect MAC filter list access */
-
-        rule_head = &mac_recipe_list->filt_rules;
-        rule_lock = &mac_recipe_list->filt_rule_lock;
-
-        mutex_lock(rule_lock);
-        list_for_each_entry(list_itr, rule_head, list_entry) {
-                u8 *existing_mac = &list_itr->fltr_info.l_data.mac.mac_addr[0];
-
-                if (ether_addr_equal(existing_mac, umac)) {
-                        mutex_unlock(rule_lock);
-                        return true;
-                }
-        }
-
-        mutex_unlock(rule_lock);
-
-        return false;
-}
-
 /**
  * ice_set_vf_mac
  * @netdev: network interface device structure
@@ -1376,13 +1343,6 @@ int ice_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
         if (ret)
                 goto out_put_vf;
 
-        if (ice_unicast_mac_exists(pf, mac)) {
-                netdev_err(netdev, "Unicast MAC %pM already exists on this PF. Preventing setting VF %u unicast MAC address to %pM\n",
-                           mac, vf_id, mac);
-                ret = -EINVAL;
-                goto out_put_vf;
-        }
-
         mutex_lock(&vf->cfg_lock);
 
         /* VF is notified of its new MAC via the PF's response to the
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -1751,11 +1751,13 @@ int ice_tx_csum(struct ice_tx_buf *first, struct ice_tx_offload_params *off)
 
         protocol = vlan_get_protocol(skb);
 
-        if (eth_p_mpls(protocol))
+        if (eth_p_mpls(protocol)) {
                 ip.hdr = skb_inner_network_header(skb);
-        else
+                l4.hdr = skb_checksum_start(skb);
+        } else {
                 ip.hdr = skb_network_header(skb);
-        l4.hdr = skb_checksum_start(skb);
+                l4.hdr = skb_transport_header(skb);
+        }
 
         /* compute outer L2 header size */
         l2_len = ip.hdr - skb->data;
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
@@ -2948,7 +2948,8 @@ ice_vc_validate_add_vlan_filter_list(struct ice_vsi *vsi,
                                      struct virtchnl_vlan_filtering_caps *vfc,
                                      struct virtchnl_vlan_filter_list_v2 *vfl)
 {
-        u16 num_requested_filters = vsi->num_vlan + vfl->num_elements;
+        u16 num_requested_filters = ice_vsi_num_non_zero_vlans(vsi) +
+                vfl->num_elements;
 
         if (num_requested_filters > vfc->max_filters)
                 return false;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
@@ -28,6 +28,9 @@
 #define MAX_RATE_EXPONENT		0x0FULL
 #define MAX_RATE_MANTISSA		0xFFULL
 
+#define CN10K_MAX_BURST_MANTISSA	0x7FFFULL
+#define CN10K_MAX_BURST_SIZE		8453888ULL
+
 /* Bitfields in NIX_TLX_PIR register */
 #define TLX_RATE_MANTISSA		GENMASK_ULL(8, 1)
 #define TLX_RATE_EXPONENT		GENMASK_ULL(12, 9)
@@ -35,6 +38,9 @@
 #define TLX_BURST_MANTISSA		GENMASK_ULL(36, 29)
 #define TLX_BURST_EXPONENT		GENMASK_ULL(40, 37)
 
+#define CN10K_TLX_BURST_MANTISSA	GENMASK_ULL(43, 29)
+#define CN10K_TLX_BURST_EXPONENT	GENMASK_ULL(47, 44)
+
 struct otx2_tc_flow_stats {
         u64 bytes;
         u64 pkts;
@@ -77,33 +83,42 @@ int otx2_tc_alloc_ent_bitmap(struct otx2_nic *nic)
 }
 EXPORT_SYMBOL(otx2_tc_alloc_ent_bitmap);
 
-static void otx2_get_egress_burst_cfg(u32 burst, u32 *burst_exp,
-                                      u32 *burst_mantissa)
+static void otx2_get_egress_burst_cfg(struct otx2_nic *nic, u32 burst,
+                                      u32 *burst_exp, u32 *burst_mantissa)
 {
+        int max_burst, max_mantissa;
         unsigned int tmp;
 
+        if (is_dev_otx2(nic->pdev)) {
+                max_burst = MAX_BURST_SIZE;
+                max_mantissa = MAX_BURST_MANTISSA;
+        } else {
+                max_burst = CN10K_MAX_BURST_SIZE;
+                max_mantissa = CN10K_MAX_BURST_MANTISSA;
+        }
+
         /* Burst is calculated as
          * ((256 + BURST_MANTISSA) << (1 + BURST_EXPONENT)) / 256
          * Max supported burst size is 130,816 bytes.
          */
-        burst = min_t(u32, burst, MAX_BURST_SIZE);
+        burst = min_t(u32, burst, max_burst);
         if (burst) {
                 *burst_exp = ilog2(burst) ? ilog2(burst) - 1 : 0;
                 tmp = burst - rounddown_pow_of_two(burst);
-                if (burst < MAX_BURST_MANTISSA)
+                if (burst < max_mantissa)
                         *burst_mantissa = tmp * 2;
                 else
                         *burst_mantissa = tmp / (1ULL << (*burst_exp - 7));
         } else {
                 *burst_exp = MAX_BURST_EXPONENT;
-                *burst_mantissa = MAX_BURST_MANTISSA;
+                *burst_mantissa = max_mantissa;
         }
 }
 
-static void otx2_get_egress_rate_cfg(u32 maxrate, u32 *exp,
+static void otx2_get_egress_rate_cfg(u64 maxrate, u32 *exp,
                                      u32 *mantissa, u32 *div_exp)
 {
-        unsigned int tmp;
+        u64 tmp;
 
         /* Rate calculation by hardware
          *
@@ -132,21 +147,44 @@ static void otx2_get_egress_rate_cfg(u32 maxrate, u32 *exp,
         }
 }
 
-static int otx2_set_matchall_egress_rate(struct otx2_nic *nic, u32 burst, u32 maxrate)
+static u64 otx2_get_txschq_rate_regval(struct otx2_nic *nic,
+                                       u64 maxrate, u32 burst)
+{
+        u32 burst_exp, burst_mantissa;
+        u32 exp, mantissa, div_exp;
+        u64 regval = 0;
+
+        /* Get exponent and mantissa values from the desired rate */
+        otx2_get_egress_burst_cfg(nic, burst, &burst_exp, &burst_mantissa);
+        otx2_get_egress_rate_cfg(maxrate, &exp, &mantissa, &div_exp);
+
+        if (is_dev_otx2(nic->pdev)) {
+                regval = FIELD_PREP(TLX_BURST_EXPONENT, (u64)burst_exp) |
+                                FIELD_PREP(TLX_BURST_MANTISSA, (u64)burst_mantissa) |
+                                FIELD_PREP(TLX_RATE_DIVIDER_EXPONENT, div_exp) |
+                                FIELD_PREP(TLX_RATE_EXPONENT, exp) |
+                                FIELD_PREP(TLX_RATE_MANTISSA, mantissa) | BIT_ULL(0);
+        } else {
+                regval = FIELD_PREP(CN10K_TLX_BURST_EXPONENT, (u64)burst_exp) |
+                                FIELD_PREP(CN10K_TLX_BURST_MANTISSA, (u64)burst_mantissa) |
+                                FIELD_PREP(TLX_RATE_DIVIDER_EXPONENT, div_exp) |
+                                FIELD_PREP(TLX_RATE_EXPONENT, exp) |
+                                FIELD_PREP(TLX_RATE_MANTISSA, mantissa) | BIT_ULL(0);
+        }
+
+        return regval;
+}
+
+static int otx2_set_matchall_egress_rate(struct otx2_nic *nic,
+                                         u32 burst, u64 maxrate)
 {
         struct otx2_hw *hw = &nic->hw;
         struct nix_txschq_config *req;
-        u32 burst_exp, burst_mantissa;
-        u32 exp, mantissa, div_exp;
         int txschq, err;
 
         /* All SQs share the same TL4, so pick the first scheduler */
         txschq = hw->txschq_list[NIX_TXSCH_LVL_TL4][0];
 
-        /* Get exponent and mantissa values from the desired rate */
-        otx2_get_egress_burst_cfg(burst, &burst_exp, &burst_mantissa);
-        otx2_get_egress_rate_cfg(maxrate, &exp, &mantissa, &div_exp);
-
         mutex_lock(&nic->mbox.lock);
         req = otx2_mbox_alloc_msg_nix_txschq_cfg(&nic->mbox);
         if (!req) {
@@ -157,11 +195,7 @@ static int otx2_set_matchall_egress_rate(struct otx2_nic *nic, u32 burst, u32 ma
         req->lvl = NIX_TXSCH_LVL_TL4;
         req->num_regs = 1;
         req->reg[0] = NIX_AF_TL4X_PIR(txschq);
-        req->regval[0] = FIELD_PREP(TLX_BURST_EXPONENT, burst_exp) |
-                         FIELD_PREP(TLX_BURST_MANTISSA, burst_mantissa) |
-                         FIELD_PREP(TLX_RATE_DIVIDER_EXPONENT, div_exp) |
-                         FIELD_PREP(TLX_RATE_EXPONENT, exp) |
-                         FIELD_PREP(TLX_RATE_MANTISSA, mantissa) | BIT_ULL(0);
+        req->regval[0] = otx2_get_txschq_rate_regval(nic, maxrate, burst);
 
         err = otx2_sync_mbox_msg(&nic->mbox);
         mutex_unlock(&nic->mbox.lock);
@@ -230,7 +264,7 @@ static int otx2_tc_egress_matchall_install(struct otx2_nic *nic,
         struct netlink_ext_ack *extack = cls->common.extack;
         struct flow_action *actions = &cls->rule->action;
         struct flow_action_entry *entry;
-        u32 rate;
+        u64 rate;
         int err;
 
         err = otx2_tc_validate_flow(nic, actions, extack);
@@ -256,7 +290,7 @@ static int otx2_tc_egress_matchall_install(struct otx2_nic *nic,
         }
         /* Convert bytes per second to Mbps */
         rate = entry->police.rate_bytes_ps * 8;
-        rate = max_t(u32, rate / 1000000, 1);
+        rate = max_t(u64, rate / 1000000, 1);
         err = otx2_set_matchall_egress_rate(nic, entry->police.burst, rate);
         if (err)
                 return err;
@@ -614,21 +648,27 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic, struct otx2_tc_flow *node,
 
                 flow_spec->dport = match.key->dst;
                 flow_mask->dport = match.mask->dst;
-                if (ip_proto == IPPROTO_UDP)
-                        req->features |= BIT_ULL(NPC_DPORT_UDP);
-                else if (ip_proto == IPPROTO_TCP)
-                        req->features |= BIT_ULL(NPC_DPORT_TCP);
-                else if (ip_proto == IPPROTO_SCTP)
-                        req->features |= BIT_ULL(NPC_DPORT_SCTP);
+
+                if (flow_mask->dport) {
+                        if (ip_proto == IPPROTO_UDP)
+                                req->features |= BIT_ULL(NPC_DPORT_UDP);
+                        else if (ip_proto == IPPROTO_TCP)
+                                req->features |= BIT_ULL(NPC_DPORT_TCP);
+                        else if (ip_proto == IPPROTO_SCTP)
+                                req->features |= BIT_ULL(NPC_DPORT_SCTP);
+                }
 
                 flow_spec->sport = match.key->src;
                 flow_mask->sport = match.mask->src;
-                if (ip_proto == IPPROTO_UDP)
-                        req->features |= BIT_ULL(NPC_SPORT_UDP);
-                else if (ip_proto == IPPROTO_TCP)
-                        req->features |= BIT_ULL(NPC_SPORT_TCP);
-                else if (ip_proto == IPPROTO_SCTP)
-                        req->features |= BIT_ULL(NPC_SPORT_SCTP);
+
+                if (flow_mask->sport) {
+                        if (ip_proto == IPPROTO_UDP)
+                                req->features |= BIT_ULL(NPC_SPORT_UDP);
+                        else if (ip_proto == IPPROTO_TCP)
+                                req->features |= BIT_ULL(NPC_SPORT_TCP);
+                        else if (ip_proto == IPPROTO_SCTP)
+                                req->features |= BIT_ULL(NPC_SPORT_SCTP);
+                }
         }
 
         return otx2_tc_parse_actions(nic, &rule->action, req, f, node);
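A note on the encoding these otx2_tc.c hunks manipulate: the hardware takes burst (and rate) as an exponent/mantissa pair, and the driver comment gives the decode as ((256 + BURST_MANTISSA) << (1 + BURST_EXPONENT)) / 256 — which is what caps OTX2 at 130,816 bytes, while CN10K's wider mantissa field reaches 8,453,888. A tiny standalone check of that formula (illustrative only; it mirrors the comment's math, not the driver's encode path):

    #include <stdint.h>
    #include <stdio.h>

    /* Decode per the comment in otx2_tc.c:
     * burst = ((256 + BURST_MANTISSA) << (1 + BURST_EXPONENT)) / 256
     */
    static uint64_t burst_decode(uint32_t exp, uint32_t mantissa)
    {
            return ((256ULL + mantissa) << (1 + exp)) / 256;
    }

    int main(void)
    {
            /* exponent 0xF, mantissa 0xFF -> 130816, the OTX2 maximum */
            printf("%llu\n", (unsigned long long)burst_decode(0xf, 0xff));
            /* exponent 0xF, mantissa 0x7FFF -> 8453888, the CN10K maximum */
            printf("%llu\n", (unsigned long long)burst_decode(0xf, 0x7fff));
            return 0;
    }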
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
@@ -4233,7 +4233,7 @@ static void nfp_bpf_opt_ldst_gather(struct nfp_prog *nfp_prog)
                         }
 
                         /* If the chain is ended by an load/store pair then this
-                         * could serve as the new head of the the next chain.
+                         * could serve as the new head of the next chain.
                          */
                         if (curr_pair_is_memcpy(meta1, meta2)) {
                                 head_ld_meta = meta1;
diff --git a/drivers/net/ethernet/sfc/ptp.c b/drivers/net/ethernet/sfc/ptp.c
@@ -1100,7 +1100,29 @@ static void efx_ptp_xmit_skb_queue(struct efx_nic *efx, struct sk_buff *skb)
 
         tx_queue = efx_channel_get_tx_queue(ptp_data->channel, type);
         if (tx_queue && tx_queue->timestamping) {
+                /* This code invokes normal driver TX code which is always
+                 * protected from softirqs when called from generic TX code,
+                 * which in turn disables preemption. Look at __dev_queue_xmit
+                 * which uses rcu_read_lock_bh disabling preemption for RCU
+                 * plus disabling softirqs. We do not need RCU reader
+                 * protection here.
+                 *
+                 * Although it is theoretically safe for current PTP TX/RX code
+                 * running without disabling softirqs, there are three good
+                 * reasons for doing so:
+                 *
+                 *      1) The code invoked is mainly implemented for non-PTP
+                 *         packets and it is always executed with softirqs
+                 *         disabled.
+                 *      2) This being a single PTP packet, better to not
+                 *         interrupt its processing by softirqs which can lead
+                 *         to high latencies.
+                 *      3) netdev_xmit_more checks preemption is disabled and
+                 *         triggers a BUG_ON if not.
+                 */
+                local_bh_disable();
                 efx_enqueue_skb(tx_queue, skb);
+                local_bh_enable();
         } else {
                 WARN_ONCE(1, "PTP channel has no timestamped tx queue\n");
                 dev_kfree_skb_any(skb);
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
@@ -688,18 +688,19 @@ static int mediatek_dwmac_probe(struct platform_device *pdev)
 
         ret = mediatek_dwmac_clks_config(priv_plat, true);
         if (ret)
-                return ret;
+                goto err_remove_config_dt;
 
         ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
-        if (ret) {
-                stmmac_remove_config_dt(pdev, plat_dat);
+        if (ret)
                 goto err_drv_probe;
-        }
 
         return 0;
 
 err_drv_probe:
         mediatek_dwmac_clks_config(priv_plat, false);
+err_remove_config_dt:
+        stmmac_remove_config_dt(pdev, plat_dat);
+
         return ret;
 }
diff --git a/drivers/net/ipa/ipa_qmi_msg.h b/drivers/net/ipa/ipa_qmi_msg.h
@@ -214,7 +214,7 @@ struct ipa_init_modem_driver_req {
 
 /* The response to a IPA_QMI_INIT_DRIVER request begins with a standard
  * QMI response, but contains other information as well.  Currently we
- * simply wait for the the INIT_DRIVER transaction to complete and
+ * simply wait for the INIT_DRIVER transaction to complete and
  * ignore any other data that might be returned.
  */
 struct ipa_init_modem_driver_rsp {
diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
@@ -243,6 +243,7 @@ static struct macsec_cb *macsec_skb_cb(struct sk_buff *skb)
 #define DEFAULT_SEND_SCI true
 #define DEFAULT_ENCRYPT false
 #define DEFAULT_ENCODING_SA 0
+#define MACSEC_XPN_MAX_REPLAY_WINDOW (((1 << 30) - 1))
 
 static bool send_sci(const struct macsec_secy *secy)
 {
@@ -1697,7 +1698,7 @@ static bool validate_add_rxsa(struct nlattr **attrs)
                 return false;
 
         if (attrs[MACSEC_SA_ATTR_PN] &&
-            *(u64 *)nla_data(attrs[MACSEC_SA_ATTR_PN]) == 0)
+            nla_get_u64(attrs[MACSEC_SA_ATTR_PN]) == 0)
                 return false;
 
         if (attrs[MACSEC_SA_ATTR_ACTIVE]) {
@@ -1753,7 +1754,8 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
         }
 
         pn_len = secy->xpn ? MACSEC_XPN_PN_LEN : MACSEC_DEFAULT_PN_LEN;
-        if (nla_len(tb_sa[MACSEC_SA_ATTR_PN]) != pn_len) {
+        if (tb_sa[MACSEC_SA_ATTR_PN] &&
+            nla_len(tb_sa[MACSEC_SA_ATTR_PN]) != pn_len) {
                 pr_notice("macsec: nl: add_rxsa: bad pn length: %d != %d\n",
                           nla_len(tb_sa[MACSEC_SA_ATTR_PN]), pn_len);
                 rtnl_unlock();
@@ -1769,7 +1771,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
                 if (nla_len(tb_sa[MACSEC_SA_ATTR_SALT]) != MACSEC_SALT_LEN) {
                         pr_notice("macsec: nl: add_rxsa: bad salt length: %d != %d\n",
                                   nla_len(tb_sa[MACSEC_SA_ATTR_SALT]),
-                                  MACSEC_SA_ATTR_SALT);
+                                  MACSEC_SALT_LEN);
                         rtnl_unlock();
                         return -EINVAL;
                 }
@@ -1842,7 +1844,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
         return 0;
 
 cleanup:
-        kfree(rx_sa);
+        macsec_rxsa_put(rx_sa);
         rtnl_unlock();
         return err;
 }
@@ -1939,7 +1941,7 @@ static bool validate_add_txsa(struct nlattr **attrs)
         if (nla_get_u8(attrs[MACSEC_SA_ATTR_AN]) >= MACSEC_NUM_AN)
                 return false;
 
-        if (nla_get_u32(attrs[MACSEC_SA_ATTR_PN]) == 0)
+        if (nla_get_u64(attrs[MACSEC_SA_ATTR_PN]) == 0)
                 return false;
 
         if (attrs[MACSEC_SA_ATTR_ACTIVE]) {
@@ -2011,7 +2013,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
                 if (nla_len(tb_sa[MACSEC_SA_ATTR_SALT]) != MACSEC_SALT_LEN) {
                         pr_notice("macsec: nl: add_txsa: bad salt length: %d != %d\n",
                                   nla_len(tb_sa[MACSEC_SA_ATTR_SALT]),
-                                  MACSEC_SA_ATTR_SALT);
+                                  MACSEC_SALT_LEN);
                         rtnl_unlock();
                         return -EINVAL;
                 }
@@ -2085,7 +2087,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
 
 cleanup:
         secy->operational = was_operational;
-        kfree(tx_sa);
+        macsec_txsa_put(tx_sa);
         rtnl_unlock();
         return err;
 }
@@ -2293,7 +2295,7 @@ static bool validate_upd_sa(struct nlattr **attrs)
         if (nla_get_u8(attrs[MACSEC_SA_ATTR_AN]) >= MACSEC_NUM_AN)
                 return false;
 
-        if (attrs[MACSEC_SA_ATTR_PN] && nla_get_u32(attrs[MACSEC_SA_ATTR_PN]) == 0)
+        if (attrs[MACSEC_SA_ATTR_PN] && nla_get_u64(attrs[MACSEC_SA_ATTR_PN]) == 0)
                 return false;
 
         if (attrs[MACSEC_SA_ATTR_ACTIVE]) {
@@ -3745,9 +3747,6 @@ static int macsec_changelink_common(struct net_device *dev,
                 secy->operational = tx_sa && tx_sa->active;
         }
 
-        if (data[IFLA_MACSEC_WINDOW])
-                secy->replay_window = nla_get_u32(data[IFLA_MACSEC_WINDOW]);
-
         if (data[IFLA_MACSEC_ENCRYPT])
                 tx_sc->encrypt = !!nla_get_u8(data[IFLA_MACSEC_ENCRYPT]);
 
@@ -3793,6 +3792,16 @@ static int macsec_changelink_common(struct net_device *dev,
                 }
         }
 
+        if (data[IFLA_MACSEC_WINDOW]) {
+                secy->replay_window = nla_get_u32(data[IFLA_MACSEC_WINDOW]);
+
+                /* IEEE 802.1AEbw-2013 10.7.8 - maximum replay window
+                 * for XPN cipher suites */
+                if (secy->xpn &&
+                    secy->replay_window > MACSEC_XPN_MAX_REPLAY_WINDOW)
+                        return -EINVAL;
+        }
+
         return 0;
 }
 
@@ -3822,7 +3831,7 @@ static int macsec_changelink(struct net_device *dev, struct nlattr *tb[],
 
         ret = macsec_changelink_common(dev, data);
         if (ret)
-                return ret;
+                goto cleanup;
 
         /* If h/w offloading is available, propagate to the device */
         if (macsec_is_offloaded(macsec)) {
diff --git a/drivers/net/pcs/pcs-xpcs.c b/drivers/net/pcs/pcs-xpcs.c
@@ -896,7 +896,7 @@ static int xpcs_get_state_c37_sgmii(struct dw_xpcs *xpcs,
          */
         ret = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_AN_INTR_STS);
         if (ret < 0)
-                return false;
+                return ret;
 
         if (ret & DW_VR_MII_C37_ANSGM_SP_LNKSTS) {
                 int speed_value;
diff --git a/drivers/net/sungem_phy.c b/drivers/net/sungem_phy.c
@@ -450,6 +450,7 @@ static int bcm5421_init(struct mii_phy* phy)
                 int can_low_power = 1;
                 if (np == NULL || of_get_property(np, "no-autolowpower", NULL))
                         can_low_power = 0;
+                of_node_put(np);
                 if (can_low_power) {
                         /* Enable automatic low-power */
                         sungem_phy_write(phy, 0x1c, 0x9002);
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
@@ -242,9 +242,15 @@ struct virtnet_info {
         /* Packet virtio header size */
         u8 hdr_len;
 
-        /* Work struct for refilling if we run low on memory. */
+        /* Work struct for delayed refilling if we run low on memory. */
         struct delayed_work refill;
 
+        /* Is delayed refill enabled? */
+        bool refill_enabled;
+
+        /* The lock to synchronize the access to refill_enabled */
+        spinlock_t refill_lock;
+
         /* Work struct for config space updates */
         struct work_struct config_work;
 
@@ -348,6 +354,20 @@ static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask)
         return p;
 }
 
+static void enable_delayed_refill(struct virtnet_info *vi)
+{
+        spin_lock_bh(&vi->refill_lock);
+        vi->refill_enabled = true;
+        spin_unlock_bh(&vi->refill_lock);
+}
+
+static void disable_delayed_refill(struct virtnet_info *vi)
+{
+        spin_lock_bh(&vi->refill_lock);
+        vi->refill_enabled = false;
+        spin_unlock_bh(&vi->refill_lock);
+}
+
 static void virtqueue_napi_schedule(struct napi_struct *napi,
                                     struct virtqueue *vq)
 {
@@ -1527,8 +1547,12 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
         }
 
         if (rq->vq->num_free > min((unsigned int)budget, virtqueue_get_vring_size(rq->vq)) / 2) {
-                if (!try_fill_recv(vi, rq, GFP_ATOMIC))
-                        schedule_delayed_work(&vi->refill, 0);
+                if (!try_fill_recv(vi, rq, GFP_ATOMIC)) {
+                        spin_lock(&vi->refill_lock);
+                        if (vi->refill_enabled)
+                                schedule_delayed_work(&vi->refill, 0);
+                        spin_unlock(&vi->refill_lock);
+                }
         }
 
         u64_stats_update_begin(&rq->stats.syncp);
@@ -1651,6 +1675,8 @@ static int virtnet_open(struct net_device *dev)
         struct virtnet_info *vi = netdev_priv(dev);
         int i, err;
 
+        enable_delayed_refill(vi);
+
         for (i = 0; i < vi->max_queue_pairs; i++) {
                 if (i < vi->curr_queue_pairs)
                         /* Make sure we have some buffers: if oom use wq. */
@@ -2033,6 +2059,8 @@ static int virtnet_close(struct net_device *dev)
         struct virtnet_info *vi = netdev_priv(dev);
         int i;
 
+        /* Make sure NAPI doesn't schedule refill work */
+        disable_delayed_refill(vi);
         /* Make sure refill_work doesn't re-enable napi! */
         cancel_delayed_work_sync(&vi->refill);
 
@@ -2792,6 +2820,8 @@ static int virtnet_restore_up(struct virtio_device *vdev)
 
         virtio_device_ready(vdev);
 
+        enable_delayed_refill(vi);
+
         if (netif_running(vi->dev)) {
                 err = virtnet_open(vi->dev);
                 if (err)
@@ -3535,6 +3565,7 @@ static int virtnet_probe(struct virtio_device *vdev)
         vdev->priv = vi;
 
         INIT_WORK(&vi->config_work, virtnet_config_changed_work);
+        spin_lock_init(&vi->refill_lock);
 
         /* If we can receive ANY GSO packets, we must allocate large ones. */
         if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) ||
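The virtio-net change above is an instance of a general kernel pattern for racing a rescheduling work item against teardown: a flag guarded by a spinlock decides whether new work may be scheduled, the flag is cleared before cancel_delayed_work_sync(), and the scheduling path checks it under the same lock. A minimal sketch of the pattern (hypothetical names, not code from this patch):

    #include <linux/spinlock.h>
    #include <linux/workqueue.h>

    struct demo {
            spinlock_t lock;           /* guards enabled */
            bool enabled;              /* may work be (re)scheduled? */
            struct delayed_work work;
    };

    static void demo_try_schedule(struct demo *d)
    {
            spin_lock_bh(&d->lock);
            if (d->enabled)            /* refused once teardown has started */
                    schedule_delayed_work(&d->work, 0);
            spin_unlock_bh(&d->lock);
    }

    static void demo_teardown(struct demo *d)
    {
            spin_lock_bh(&d->lock);
            d->enabled = false;        /* no new scheduling after this point */
            spin_unlock_bh(&d->lock);
            cancel_delayed_work_sync(&d->work); /* wait out a running instance */
    }

Clearing the flag before the synchronous cancel is what closes the window: without it, a concurrently running NAPI poll could re-arm the work right after the cancel completes.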
diff --git a/drivers/ptp/Kconfig b/drivers/ptp/Kconfig
@@ -176,6 +176,7 @@ config PTP_1588_CLOCK_OCP
 	depends on !S390
 	depends on COMMON_CLK
 	select NET_DEVLINK
+	select CRC16
 	help
 	  This driver adds support for an OpenCompute time card.
 
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
@@ -3565,7 +3565,7 @@ static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int index,
                 if (!atomic_read(&queue->set_pci_flags_count)) {
                         /*
                          * there's no outstanding PCI any more, so we
-                         * have to request a PCI to be sure the the PCI
+                         * have to request a PCI to be sure the PCI
                          * will wake at some time in the future then we
                          * can flush packed buffers that might still be
                          * hanging around, which can happen if no
diff --git a/include/net/addrconf.h b/include/net/addrconf.h
@@ -405,6 +405,9 @@ static inline bool ip6_ignore_linkdown(const struct net_device *dev)
 {
         const struct inet6_dev *idev = __in6_dev_get(dev);
 
+        if (unlikely(!idev))
+                return true;
+
         return !!idev->cnf.ignore_routes_with_linkdown;
 }
 
diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h
@@ -847,6 +847,7 @@ enum {
 };
 
 void l2cap_chan_hold(struct l2cap_chan *c);
+struct l2cap_chan *l2cap_chan_hold_unless_zero(struct l2cap_chan *c);
 void l2cap_chan_put(struct l2cap_chan *c);
 
 static inline void l2cap_chan_lock(struct l2cap_chan *chan)
diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
@@ -321,7 +321,7 @@ void inet_csk_update_fastreuse(struct inet_bind_bucket *tb,
 
 struct dst_entry *inet_csk_update_pmtu(struct sock *sk, u32 mtu);
 
-#define TCP_PINGPONG_THRESH	3
+#define TCP_PINGPONG_THRESH	1
 
 static inline void inet_csk_enter_pingpong_mode(struct sock *sk)
 {
@@ -338,14 +338,6 @@ static inline bool inet_csk_in_pingpong_mode(struct sock *sk)
         return inet_csk(sk)->icsk_ack.pingpong >= TCP_PINGPONG_THRESH;
 }
 
-static inline void inet_csk_inc_pingpong_cnt(struct sock *sk)
-{
-        struct inet_connection_sock *icsk = inet_csk(sk);
-
-        if (icsk->icsk_ack.pingpong < U8_MAX)
-                icsk->icsk_ack.pingpong++;
-}
-
 static inline bool inet_csk_has_ulp(struct sock *sk)
 {
         return inet_sk(sk)->is_icsk && !!inet_csk(sk)->icsk_ulp_ops;
diff --git a/include/net/sock.h b/include/net/sock.h
@@ -2843,18 +2843,18 @@ static inline int sk_get_wmem0(const struct sock *sk, const struct proto *proto)
 {
         /* Does this proto have per netns sysctl_wmem ? */
         if (proto->sysctl_wmem_offset)
-                return *(int *)((void *)sock_net(sk) + proto->sysctl_wmem_offset);
+                return READ_ONCE(*(int *)((void *)sock_net(sk) + proto->sysctl_wmem_offset));
 
-        return *proto->sysctl_wmem;
+        return READ_ONCE(*proto->sysctl_wmem);
 }
 
 static inline int sk_get_rmem0(const struct sock *sk, const struct proto *proto)
 {
         /* Does this proto have per netns sysctl_rmem ? */
         if (proto->sysctl_rmem_offset)
-                return *(int *)((void *)sock_net(sk) + proto->sysctl_rmem_offset);
+                return READ_ONCE(*(int *)((void *)sock_net(sk) + proto->sysctl_rmem_offset));
 
-        return *proto->sysctl_rmem;
+        return READ_ONCE(*proto->sysctl_rmem);
 }
 
 /* Default TCP Small queue budget is ~1 ms of data (1sec >> 10)
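The sock.h hunk above and a number of hunks below (tcp.h, decnet, fib_trie.c, tcp.c, tcp_input.c) are the "sysctl data race adornment" mentioned in the pull message: sysctl integers are written locklessly from /proc, so each lockless reader is converted to READ_ONCE() (with WRITE_ONCE() on the writer side) to mark the race as intentional for KCSAN and to keep the compiler from tearing or re-reading the value. The shape of the conversion, in a minimal self-contained sketch (the struct and field names here are illustrative, not from the kernel):

    #include <linux/compiler.h>   /* READ_ONCE / WRITE_ONCE */

    struct demo_net {
            int sysctl_foo;  /* hypothetical knob, updated locklessly via /proc */
    };

    /* Reader: snapshot the knob once so every use sees the same value and
     * KCSAN knows the lockless access is intentional.
     */
    static int demo_limit(const struct demo_net *net)
    {
            int foo = READ_ONCE(net->sysctl_foo);

            return foo > 0 ? foo : 1;
    }

    /* The writer side pairs with it. */
    static void demo_set(struct demo_net *net, int val)
    {
            WRITE_ONCE(net->sysctl_foo, val);
    }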
diff --git a/include/net/tcp.h b/include/net/tcp.h
@@ -1419,7 +1419,7 @@ void tcp_select_initial_window(const struct sock *sk, int __space,
 
 static inline int tcp_win_from_space(const struct sock *sk, int space)
 {
-        int tcp_adv_win_scale = sock_net(sk)->ipv4.sysctl_tcp_adv_win_scale;
+        int tcp_adv_win_scale = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_adv_win_scale);
 
         return tcp_adv_win_scale <= 0 ?
                 (space>>(-tcp_adv_win_scale)) :
diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
@@ -4973,6 +4973,9 @@ int hci_suspend_sync(struct hci_dev *hdev)
                         return err;
         }
 
+        /* Update event mask so only the allowed event can wakeup the host */
+        hci_set_event_mask_sync(hdev);
+
         /* Only configure accept list if disconnect succeeded and wake
          * isn't being prevented.
          */
@@ -4984,9 +4987,6 @@ int hci_suspend_sync(struct hci_dev *hdev)
         /* Unpause to take care of updating scanning params */
         hdev->scanning_paused = false;
 
-        /* Update event mask so only the allowed event can wakeup the host */
-        hci_set_event_mask_sync(hdev);
-
         /* Enable event filter for paired devices */
         hci_update_event_filter_sync(hdev);
 
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
@@ -111,7 +111,8 @@ static struct l2cap_chan *__l2cap_get_chan_by_scid(struct l2cap_conn *conn,
 }
 
 /* Find channel with given SCID.
- * Returns locked channel. */
+ * Returns a reference locked channel.
+ */
 static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn,
                                                  u16 cid)
 {
@@ -119,15 +120,19 @@ static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn,
 
         mutex_lock(&conn->chan_lock);
         c = __l2cap_get_chan_by_scid(conn, cid);
-        if (c)
-                l2cap_chan_lock(c);
+        if (c) {
+                /* Only lock if chan reference is not 0 */
+                c = l2cap_chan_hold_unless_zero(c);
+                if (c)
+                        l2cap_chan_lock(c);
+        }
         mutex_unlock(&conn->chan_lock);
 
         return c;
 }
 
 /* Find channel with given DCID.
- * Returns locked channel.
+ * Returns a reference locked channel.
  */
 static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn,
                                                  u16 cid)
@@ -136,8 +141,12 @@ static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn,
 
         mutex_lock(&conn->chan_lock);
         c = __l2cap_get_chan_by_dcid(conn, cid);
-        if (c)
-                l2cap_chan_lock(c);
+        if (c) {
+                /* Only lock if chan reference is not 0 */
+                c = l2cap_chan_hold_unless_zero(c);
+                if (c)
+                        l2cap_chan_lock(c);
+        }
         mutex_unlock(&conn->chan_lock);
 
         return c;
@@ -162,8 +171,12 @@ static struct l2cap_chan *l2cap_get_chan_by_ident(struct l2cap_conn *conn,
 
         mutex_lock(&conn->chan_lock);
         c = __l2cap_get_chan_by_ident(conn, ident);
-        if (c)
-                l2cap_chan_lock(c);
+        if (c) {
+                /* Only lock if chan reference is not 0 */
+                c = l2cap_chan_hold_unless_zero(c);
+                if (c)
+                        l2cap_chan_lock(c);
+        }
         mutex_unlock(&conn->chan_lock);
 
         return c;
@@ -497,6 +510,16 @@ void l2cap_chan_hold(struct l2cap_chan *c)
         kref_get(&c->kref);
 }
 
+struct l2cap_chan *l2cap_chan_hold_unless_zero(struct l2cap_chan *c)
+{
+        BT_DBG("chan %p orig refcnt %u", c, kref_read(&c->kref));
+
+        if (!kref_get_unless_zero(&c->kref))
+                return NULL;
+
+        return c;
+}
+
 void l2cap_chan_put(struct l2cap_chan *c)
 {
         BT_DBG("chan %p orig refcnt %u", c, kref_read(&c->kref));
@@ -1968,7 +1991,10 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
                         src_match = !bacmp(&c->src, src);
                         dst_match = !bacmp(&c->dst, dst);
                         if (src_match && dst_match) {
-                                l2cap_chan_hold(c);
+                                c = l2cap_chan_hold_unless_zero(c);
+                                if (!c)
+                                        continue;
+
                                 read_unlock(&chan_list_lock);
                                 return c;
                         }
@@ -1983,7 +2009,7 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
         }
 
         if (c1)
-                l2cap_chan_hold(c1);
+                c1 = l2cap_chan_hold_unless_zero(c1);
 
         read_unlock(&chan_list_lock);
 
@@ -4463,6 +4489,7 @@ static inline int l2cap_config_req(struct l2cap_conn *conn,
 
 unlock:
         l2cap_chan_unlock(chan);
+        l2cap_chan_put(chan);
         return err;
 }
 
@@ -4577,6 +4604,7 @@ static inline int l2cap_config_rsp(struct l2cap_conn *conn,
 
 done:
         l2cap_chan_unlock(chan);
+        l2cap_chan_put(chan);
         return err;
 }
 
@@ -5304,6 +5332,7 @@ send_move_response:
         l2cap_send_move_chan_rsp(chan, result);
 
         l2cap_chan_unlock(chan);
+        l2cap_chan_put(chan);
 
         return 0;
 }
@@ -5396,6 +5425,7 @@ static void l2cap_move_continue(struct l2cap_conn *conn, u16 icid, u16 result)
         }
 
         l2cap_chan_unlock(chan);
+        l2cap_chan_put(chan);
 }
 
 static void l2cap_move_fail(struct l2cap_conn *conn, u8 ident, u16 icid,
@@ -5425,6 +5455,7 @@ static void l2cap_move_fail(struct l2cap_conn *conn, u8 ident, u16 icid,
         l2cap_send_move_chan_cfm(chan, L2CAP_MC_UNCONFIRMED);
 
         l2cap_chan_unlock(chan);
+        l2cap_chan_put(chan);
 }
 
 static int l2cap_move_channel_rsp(struct l2cap_conn *conn,
@@ -5488,6 +5519,7 @@ static int l2cap_move_channel_confirm(struct l2cap_conn *conn,
         l2cap_send_move_chan_cfm_rsp(conn, cmd->ident, icid);
 
         l2cap_chan_unlock(chan);
+        l2cap_chan_put(chan);
 
         return 0;
 }
@@ -5523,6 +5555,7 @@ static inline int l2cap_move_channel_confirm_rsp(struct l2cap_conn *conn,
         }
 
         l2cap_chan_unlock(chan);
+        l2cap_chan_put(chan);
 
         return 0;
 }
@@ -5895,12 +5928,11 @@ static inline int l2cap_le_credits(struct l2cap_conn *conn,
         if (credits > max_credits) {
                 BT_ERR("LE credits overflow");
                 l2cap_send_disconn_req(chan, ECONNRESET);
-                l2cap_chan_unlock(chan);
 
                 /* Return 0 so that we don't trigger an unnecessary
                  * command reject packet.
                  */
-                return 0;
+                goto unlock;
         }
 
         chan->tx_credits += credits;
@@ -5911,7 +5943,9 @@ static inline int l2cap_le_credits(struct l2cap_conn *conn,
         if (chan->tx_credits)
                 chan->ops->resume(chan);
 
+unlock:
         l2cap_chan_unlock(chan);
+        l2cap_chan_put(chan);
 
         return 0;
 }
@@ -7597,6 +7631,7 @@ drop:
 
 done:
         l2cap_chan_unlock(chan);
+        l2cap_chan_put(chan);
 }
 
 static void l2cap_conless_channel(struct l2cap_conn *conn, __le16 psm,
@@ -8085,7 +8120,7 @@ static struct l2cap_chan *l2cap_global_fixed_chan(struct l2cap_chan *c,
                 if (src_type != c->src_type)
                         continue;
 
-                l2cap_chan_hold(c);
+                c = l2cap_chan_hold_unless_zero(c);
                 read_unlock(&chan_list_lock);
                 return c;
         }
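The l2cap_chan_hold_unless_zero() helper introduced above is the standard kref_get_unless_zero() idiom: when an object is looked up through a list that does not itself pin it, a plain kref_get() can resurrect an object whose refcount already reached zero and whose release is in flight, which is exactly the use-after-free this patch closes. Taking the reference conditionally and treating failure like a lookup miss avoids that. A generic sketch of the idiom (illustrative type names, not from the patch):

    #include <linux/kref.h>
    #include <linux/slab.h>

    struct obj {
            struct kref kref;
            /* ... payload ... */
    };

    /* Return obj with a reference held, or NULL if it is already dying. */
    static struct obj *obj_get_unless_zero(struct obj *o)
    {
            if (!o || !kref_get_unless_zero(&o->kref))
                    return NULL;  /* release in progress: caller must skip it */
            return o;
    }

    static void obj_release(struct kref *kref)
    {
            kfree(container_of(kref, struct obj, kref));
    }

    static void obj_put(struct obj *o)
    {
            kref_put(&o->kref, obj_release);
    }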
diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
@@ -4723,7 +4723,6 @@ static int __add_adv_patterns_monitor(struct sock *sk, struct hci_dev *hdev,
                 else
                         status = MGMT_STATUS_FAILED;
 
-                mgmt_pending_remove(cmd);
                 goto unlock;
         }
 
diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
@@ -589,9 +589,13 @@ static int br_fill_ifinfo(struct sk_buff *skb,
         }
 
 done:
+        if (af) {
+                if (nlmsg_get_pos(skb) - (void *)af > nla_attr_size(0))
+                        nla_nest_end(skb, af);
+                else
+                        nla_nest_cancel(skb, af);
+        }
 
-        if (af)
-                nla_nest_end(skb, af);
         nlmsg_end(skb, nlh);
         return 0;
 
diff --git a/net/caif/caif_socket.c b/net/caif/caif_socket.c
@@ -47,7 +47,7 @@ enum caif_states {
 struct caifsock {
         struct sock sk; /* must be first member */
         struct cflayer layer;
-        u32 flow_state;
+        unsigned long flow_state;
         struct caif_connect_request conn_req;
         struct mutex readlock;
         struct dentry *debugfs_socket_dir;
@@ -56,38 +56,32 @@ struct caifsock {
 
 static int rx_flow_is_on(struct caifsock *cf_sk)
 {
-        return test_bit(RX_FLOW_ON_BIT,
-                        (void *) &cf_sk->flow_state);
+        return test_bit(RX_FLOW_ON_BIT, &cf_sk->flow_state);
 }
 
 static int tx_flow_is_on(struct caifsock *cf_sk)
 {
-        return test_bit(TX_FLOW_ON_BIT,
-                        (void *) &cf_sk->flow_state);
+        return test_bit(TX_FLOW_ON_BIT, &cf_sk->flow_state);
 }
 
 static void set_rx_flow_off(struct caifsock *cf_sk)
 {
-        clear_bit(RX_FLOW_ON_BIT,
-                  (void *) &cf_sk->flow_state);
+        clear_bit(RX_FLOW_ON_BIT, &cf_sk->flow_state);
 }
 
 static void set_rx_flow_on(struct caifsock *cf_sk)
 {
-        set_bit(RX_FLOW_ON_BIT,
-                (void *) &cf_sk->flow_state);
+        set_bit(RX_FLOW_ON_BIT, &cf_sk->flow_state);
 }
 
 static void set_tx_flow_off(struct caifsock *cf_sk)
 {
-        clear_bit(TX_FLOW_ON_BIT,
-                  (void *) &cf_sk->flow_state);
+        clear_bit(TX_FLOW_ON_BIT, &cf_sk->flow_state);
 }
 
 static void set_tx_flow_on(struct caifsock *cf_sk)
 {
-        set_bit(TX_FLOW_ON_BIT,
-                (void *) &cf_sk->flow_state);
+        set_bit(TX_FLOW_ON_BIT, &cf_sk->flow_state);
 }
 
 static void caif_read_lock(struct sock *sk)
diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c
@@ -480,8 +480,8 @@ static struct sock *dn_alloc_sock(struct net *net, struct socket *sock, gfp_t gf
         sk->sk_family      = PF_DECnet;
         sk->sk_protocol    = 0;
         sk->sk_allocation  = gfp;
-        sk->sk_sndbuf      = sysctl_decnet_wmem[1];
-        sk->sk_rcvbuf      = sysctl_decnet_rmem[1];
+        sk->sk_sndbuf      = READ_ONCE(sysctl_decnet_wmem[1]);
+        sk->sk_rcvbuf      = READ_ONCE(sysctl_decnet_rmem[1]);
 
         /* Initialization of DECnet Session Control Port */
         scp = DN_SK(sk);
diff --git a/net/dsa/switch.c b/net/dsa/switch.c
@@ -344,6 +344,7 @@ static int dsa_switch_do_lag_fdb_add(struct dsa_switch *ds, struct dsa_lag *lag,
 
         ether_addr_copy(a->addr, addr);
         a->vid = vid;
+        a->db = db;
         refcount_set(&a->refcount, 1);
         list_add_tail(&a->list, &lag->fdbs);
 
diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
@@ -1042,6 +1042,7 @@ fib_find_matching_alias(struct net *net, const struct fib_rt_info *fri)
 
 void fib_alias_hw_flags_set(struct net *net, const struct fib_rt_info *fri)
 {
+        u8 fib_notify_on_flag_change;
         struct fib_alias *fa_match;
         struct sk_buff *skb;
         int err;
@@ -1063,14 +1064,16 @@ void fib_alias_hw_flags_set(struct net *net, const struct fib_rt_info *fri)
         WRITE_ONCE(fa_match->offload, fri->offload);
         WRITE_ONCE(fa_match->trap, fri->trap);
 
+        fib_notify_on_flag_change = READ_ONCE(net->ipv4.sysctl_fib_notify_on_flag_change);
+
         /* 2 means send notifications only if offload_failed was changed. */
-        if (net->ipv4.sysctl_fib_notify_on_flag_change == 2 &&
+        if (fib_notify_on_flag_change == 2 &&
             READ_ONCE(fa_match->offload_failed) == fri->offload_failed)
                 goto out;
 
         WRITE_ONCE(fa_match->offload_failed, fri->offload_failed);
 
-        if (!net->ipv4.sysctl_fib_notify_on_flag_change)
+        if (!fib_notify_on_flag_change)
                 goto out;
 
         skb = nlmsg_new(fib_nlmsg_size(fa_match->fa_info), GFP_ATOMIC);
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
@@ -452,8 +452,8 @@ void tcp_init_sock(struct sock *sk)
 
         icsk->icsk_sync_mss = tcp_sync_mss;
 
-        WRITE_ONCE(sk->sk_sndbuf, sock_net(sk)->ipv4.sysctl_tcp_wmem[1]);
-        WRITE_ONCE(sk->sk_rcvbuf, sock_net(sk)->ipv4.sysctl_tcp_rmem[1]);
+        WRITE_ONCE(sk->sk_sndbuf, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_wmem[1]));
+        WRITE_ONCE(sk->sk_rcvbuf, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[1]));
 
         sk_sockets_allocated_inc(sk);
 }
@@ -686,7 +686,7 @@ static bool tcp_should_autocork(struct sock *sk, struct sk_buff *skb,
                                 int size_goal)
 {
         return skb->len < size_goal &&
-               sock_net(sk)->ipv4.sysctl_tcp_autocorking &&
+               READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_autocorking) &&
                !tcp_rtx_queue_empty(sk) &&
                refcount_read(&sk->sk_wmem_alloc) > skb->truesize &&
                tcp_skb_can_collapse_to(skb);
@@ -1724,7 +1724,7 @@ int tcp_set_rcvlowat(struct sock *sk, int val)
         if (sk->sk_userlocks & SOCK_RCVBUF_LOCK)
                 cap = sk->sk_rcvbuf >> 1;
         else
-                cap = sock_net(sk)->ipv4.sysctl_tcp_rmem[2] >> 1;
+                cap = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]) >> 1;
         val = min(val, cap);
         WRITE_ONCE(sk->sk_rcvlowat, val ? : 1);
 
@@ -4459,9 +4459,18 @@ tcp_inbound_md5_hash(const struct sock *sk, const struct sk_buff *skb,
                 return SKB_DROP_REASON_TCP_MD5UNEXPECTED;
         }
 
-        /* check the signature */
-        genhash = tp->af_specific->calc_md5_hash(newhash, hash_expected,
-                                                 NULL, skb);
+        /* Check the signature.
+         * To support dual stack listeners, we need to handle
+         * IPv4-mapped case.
+         */
+        if (family == AF_INET)
+                genhash = tcp_v4_md5_hash_skb(newhash,
+                                              hash_expected,
+                                              NULL, skb);
+        else
+                genhash = tp->af_specific->calc_md5_hash(newhash,
+                                                         hash_expected,
+                                                         NULL, skb);
 
         if (genhash || memcmp(hash_location, newhash, 16) != 0) {
                 NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5FAILURE);
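For context on the md5 hunk above: a dual-stack (AF_INET6) listener can accept IPv4 connections, whose peers appear as IPv4-mapped IPv6 addresses (::ffff:a.b.c.d); for those, the signature must be computed with the IPv4 keying rules even though af_specific points at the v6 operations, which is why the fix dispatches on the address family instead. A minimal illustration of the mapped-address distinction (a userspace-style sketch, not kernel code):

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Sketch: pick the keying family for a dual-stack listener.
     * An AF_INET6 socket sees IPv4 peers as ::ffff:a.b.c.d, so the
     * MD5 signature must be computed over the IPv4 pseudo-header,
     * analogous to the tcp_v4_md5_hash_skb() branch above.
     */
    static int md5_hash_family(int sk_family, const struct in6_addr *peer)
    {
            if (sk_family == AF_INET || IN6_IS_ADDR_V4MAPPED(peer))
                    return AF_INET;
            return AF_INET6;
    }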
@ -426,7 +426,7 @@ static void tcp_sndbuf_expand(struct sock *sk)
|
||||
|
||||
if (sk->sk_sndbuf < sndmem)
|
||||
WRITE_ONCE(sk->sk_sndbuf,
|
||||
min(sndmem, sock_net(sk)->ipv4.sysctl_tcp_wmem[2]));
|
||||
min(sndmem, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_wmem[2])));
|
||||
}
|
||||
|
||||
/* 2. Tuning advertised window (window_clamp, rcv_ssthresh)
|
||||
@ -461,7 +461,7 @@ static int __tcp_grow_window(const struct sock *sk, const struct sk_buff *skb,
|
||||
struct tcp_sock *tp = tcp_sk(sk);
|
||||
/* Optimize this! */
|
||||
int truesize = tcp_win_from_space(sk, skbtruesize) >> 1;
|
||||
int window = tcp_win_from_space(sk, sock_net(sk)->ipv4.sysctl_tcp_rmem[2]) >> 1;
|
||||
int window = tcp_win_from_space(sk, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2])) >> 1;
|
||||
|
||||
while (tp->rcv_ssthresh <= window) {
|
||||
if (truesize <= skb->len)
|
||||
@ -534,7 +534,7 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb,
|
||||
*/
|
||||
static void tcp_init_buffer_space(struct sock *sk)
|
||||
{
|
||||
int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
|
||||
int tcp_app_win = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_app_win);
|
||||
struct tcp_sock *tp = tcp_sk(sk);
|
||||
int maxwin;
|
||||
|
||||
@ -574,16 +574,17 @@ static void tcp_clamp_window(struct sock *sk)
|
||||
struct tcp_sock *tp = tcp_sk(sk);
|
||||
struct inet_connection_sock *icsk = inet_csk(sk);
|
||||
struct net *net = sock_net(sk);
|
||||
int rmem2;
|
||||
|
||||
icsk->icsk_ack.quick = 0;
|
||||
rmem2 = READ_ONCE(net->ipv4.sysctl_tcp_rmem[2]);
|
||||
|
||||
if (sk->sk_rcvbuf < net->ipv4.sysctl_tcp_rmem[2] &&
|
||||
if (sk->sk_rcvbuf < rmem2 &&
|
||||
!(sk->sk_userlocks & SOCK_RCVBUF_LOCK) &&
|
||||
!tcp_under_memory_pressure(sk) &&
|
||||
sk_memory_allocated(sk) < sk_prot_mem_limits(sk, 0)) {
|
||||
WRITE_ONCE(sk->sk_rcvbuf,
|
||||
min(atomic_read(&sk->sk_rmem_alloc),
|
||||
net->ipv4.sysctl_tcp_rmem[2]));
|
||||
min(atomic_read(&sk->sk_rmem_alloc), rmem2));
|
||||
}
|
||||
if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf)
|
||||
tp->rcv_ssthresh = min(tp->window_clamp, 2U * tp->advmss);
|
||||
@ -724,7 +725,7 @@ void tcp_rcv_space_adjust(struct sock *sk)
|
||||
* <prev RTT . ><current RTT .. ><next RTT .... >
|
||||
*/
|
||||
|
||||
if (sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf &&
|
||||
if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf) &&
|
||||
!(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
|
||||
int rcvmem, rcvbuf;
|
||||
u64 rcvwin, grow;
|
||||
@ -745,7 +746,7 @@ void tcp_rcv_space_adjust(struct sock *sk)
|
||||
|
||||
do_div(rcvwin, tp->advmss);
|
||||
rcvbuf = min_t(u64, rcvwin * rcvmem,
|
||||
sock_net(sk)->ipv4.sysctl_tcp_rmem[2]);
|
||||
READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
|
||||
if (rcvbuf > sk->sk_rcvbuf) {
|
||||
WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);
|
||||
|
||||
@ -910,9 +911,9 @@ static void tcp_update_pacing_rate(struct sock *sk)
|
||||
* end of slow start and should slow down.
|
||||
*/
|
||||
if (tcp_snd_cwnd(tp) < tp->snd_ssthresh / 2)
|
||||
rate *= sock_net(sk)->ipv4.sysctl_tcp_pacing_ss_ratio;
|
||||
rate *= READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_pacing_ss_ratio);
|
||||
else
|
||||
rate *= sock_net(sk)->ipv4.sysctl_tcp_pacing_ca_ratio;
|
||||
rate *= READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_pacing_ca_ratio);
|
||||
|
||||
rate *= max(tcp_snd_cwnd(tp), tp->packets_out);
|
||||
|
||||
@ -2175,7 +2176,7 @@ void tcp_enter_loss(struct sock *sk)
|
||||
* loss recovery is underway except recurring timeout(s) on
|
||||
* the same SND.UNA (sec 3.2). Disable F-RTO on path MTU probing
|
||||
*/
|
||||
tp->frto = net->ipv4.sysctl_tcp_frto &&
|
||||
tp->frto = READ_ONCE(net->ipv4.sysctl_tcp_frto) &&
|
||||
(new_recovery || icsk->icsk_retransmits) &&
|
||||
!inet_csk(sk)->icsk_mtup.probe_size;
|
||||
}
|
||||
@ -3058,7 +3059,7 @@ static void tcp_fastretrans_alert(struct sock *sk, const u32 prior_snd_una,
|
||||
|
||||
static void tcp_update_rtt_min(struct sock *sk, u32 rtt_us, const int flag)
|
||||
{
|
||||
u32 wlen = sock_net(sk)->ipv4.sysctl_tcp_min_rtt_wlen * HZ;
|
||||
u32 wlen = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_min_rtt_wlen) * HZ;
|
||||
struct tcp_sock *tp = tcp_sk(sk);
|
||||
|
||||
if ((flag & FLAG_ACK_MAYBE_DELAYED) && rtt_us > tcp_min_rtt(tp)) {
|
||||
@ -3581,7 +3582,8 @@ static bool __tcp_oow_rate_limited(struct net *net, int mib_idx,
|
||||
if (*last_oow_ack_time) {
|
||||
s32 elapsed = (s32)(tcp_jiffies32 - *last_oow_ack_time);
|
||||
|
||||
if (0 <= elapsed && elapsed < net->ipv4.sysctl_tcp_invalid_ratelimit) {
|
||||
if (0 <= elapsed &&
|
||||
elapsed < READ_ONCE(net->ipv4.sysctl_tcp_invalid_ratelimit)) {
|
||||
NET_INC_STATS(net, mib_idx);
|
||||
return true; /* rate-limited: don't send yet! */
|
||||
}
|
||||
@ -3629,7 +3631,7 @@ static void tcp_send_challenge_ack(struct sock *sk)
|
||||
/* Then check host-wide RFC 5961 rate limit. */
|
||||
now = jiffies / HZ;
|
||||
if (now != challenge_timestamp) {
|
||||
u32 ack_limit = net->ipv4.sysctl_tcp_challenge_ack_limit;
|
||||
u32 ack_limit = READ_ONCE(net->ipv4.sysctl_tcp_challenge_ack_limit);
|
||||
u32 half = (ack_limit + 1) >> 1;
|
||||
|
||||
challenge_timestamp = now;
|
||||
@ -4426,7 +4428,7 @@ static void tcp_dsack_set(struct sock *sk, u32 seq, u32 end_seq)
|
||||
{
|
||||
struct tcp_sock *tp = tcp_sk(sk);
|
||||
|
||||
if (tcp_is_sack(tp) && sock_net(sk)->ipv4.sysctl_tcp_dsack) {
|
||||
if (tcp_is_sack(tp) && READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_dsack)) {
|
||||
int mib_idx;
|
||||
|
||||
if (before(seq, tp->rcv_nxt))
|
||||
@ -4473,7 +4475,7 @@ static void tcp_send_dupack(struct sock *sk, const struct sk_buff *skb)
|
||||
NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKLOST);
|
||||
tcp_enter_quickack_mode(sk, TCP_MAX_QUICKACKS);
|
||||
|
||||
if (tcp_is_sack(tp) && sock_net(sk)->ipv4.sysctl_tcp_dsack) {
|
||||
if (tcp_is_sack(tp) && READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_dsack)) {
|
||||
u32 end_seq = TCP_SKB_CB(skb)->end_seq;
|
||||
|
||||
tcp_rcv_spurious_retrans(sk, skb);
|
||||
@ -5519,7 +5521,7 @@ send_now:
|
||||
}
|
||||
|
||||
if (!tcp_is_sack(tp) ||
|
||||
tp->compressed_ack >= sock_net(sk)->ipv4.sysctl_tcp_comp_sack_nr)
|
||||
tp->compressed_ack >= READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_comp_sack_nr))
|
||||
goto send_now;
|
||||
|
||||
if (tp->compressed_ack_rcv_nxt != tp->rcv_nxt) {
|
||||
@ -5540,11 +5542,12 @@ send_now:
|
||||
if (tp->srtt_us && tp->srtt_us < rtt)
|
||||
rtt = tp->srtt_us;
|
||||
|
||||
delay = min_t(unsigned long, sock_net(sk)->ipv4.sysctl_tcp_comp_sack_delay_ns,
|
||||
delay = min_t(unsigned long,
|
||||
READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_comp_sack_delay_ns),
|
||||
rtt * (NSEC_PER_USEC >> 3)/20);
|
||||
sock_hold(sk);
|
||||
hrtimer_start_range_ns(&tp->compressed_ack_timer, ns_to_ktime(delay),
|
||||
sock_net(sk)->ipv4.sysctl_tcp_comp_sack_slack_ns,
|
||||
READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_comp_sack_slack_ns),
|
||||
HRTIMER_MODE_REL_PINNED_SOFT);
|
||||
}
|
||||
|
||||
|
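All of the hunks above (apparently net/ipv4/tcp_input.c) make the same change: sysctl knobs that a concurrent sysctl handler may rewrite at any moment are now loaded through READ_ONCE(), pairing with WRITE_ONCE() on the store side, so the compiler can neither tear the access nor silently re-load it mid-computation. Below is a minimal userspace sketch of the pattern; the macro definitions only imitate the kernel's volatile-access idea and are this sketch's assumption, not the kernel implementation:

#include <pthread.h>
#include <stdio.h>

/* simplified stand-ins for the kernel macros */
#define READ_ONCE(x)        (*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, val)  (*(volatile __typeof__(x) *)&(x) = (val))

static int sysctl_knob = 100;   /* plays the role of a net->ipv4.sysctl_* field */

static void *sysctl_writer(void *arg)
{
        WRITE_ONCE(sysctl_knob, 200);   /* sysctl handler publishing a new value */
        return NULL;
}

static void *fast_path_reader(void *arg)
{
        /* load exactly once; a plain read could be torn or re-fetched */
        int v = READ_ONCE(sysctl_knob);

        printf("reader saw %d\n", v);
        return NULL;
}

int main(void)
{
        pthread_t w, r;

        pthread_create(&w, NULL, sysctl_writer, NULL);
        pthread_create(&r, NULL, fast_path_reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
}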
@@ -1006,7 +1006,7 @@ static int tcp_v4_send_synack(const struct sock *sk, struct dst_entry *dst,
 	if (skb) {
 		__tcp_v4_send_check(skb, ireq->ir_loc_addr, ireq->ir_rmt_addr);
 
-		tos = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ?
+		tos = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos) ?
 				(tcp_rsk(req)->syn_tos & ~INET_ECN_MASK) |
 				(inet_sk(sk)->tos & INET_ECN_MASK) :
 				inet_sk(sk)->tos;
@@ -1526,7 +1526,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
 	/* Set ToS of the new socket based upon the value of incoming SYN.
 	 * ECT bits are set later in tcp_init_transfer().
 	 */
-	if (sock_net(sk)->ipv4.sysctl_tcp_reflect_tos)
+	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos))
 		newinet->tos = tcp_rsk(req)->syn_tos & ~INET_ECN_MASK;
 
 	if (!dst) {
@@ -329,7 +329,7 @@ void tcp_update_metrics(struct sock *sk)
 	int m;
 
 	sk_dst_confirm(sk);
-	if (net->ipv4.sysctl_tcp_nometrics_save || !dst)
+	if (READ_ONCE(net->ipv4.sysctl_tcp_nometrics_save) || !dst)
 		return;
 
 	rcu_read_lock();
@@ -385,7 +385,7 @@ void tcp_update_metrics(struct sock *sk)
 
 	if (tcp_in_initial_slowstart(tp)) {
 		/* Slow start still did not finish. */
-		if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save &&
+		if (!READ_ONCE(net->ipv4.sysctl_tcp_no_ssthresh_metrics_save) &&
 		    !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) {
 			val = tcp_metric_get(tm, TCP_METRIC_SSTHRESH);
 			if (val && (tcp_snd_cwnd(tp) >> 1) > val)
@@ -401,7 +401,7 @@ void tcp_update_metrics(struct sock *sk)
 	} else if (!tcp_in_slow_start(tp) &&
 		   icsk->icsk_ca_state == TCP_CA_Open) {
 		/* Cong. avoidance phase, cwnd is reliable. */
-		if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save &&
+		if (!READ_ONCE(net->ipv4.sysctl_tcp_no_ssthresh_metrics_save) &&
 		    !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH))
 			tcp_metric_set(tm, TCP_METRIC_SSTHRESH,
 				       max(tcp_snd_cwnd(tp) >> 1, tp->snd_ssthresh));
@@ -418,7 +418,7 @@ void tcp_update_metrics(struct sock *sk)
 			tcp_metric_set(tm, TCP_METRIC_CWND,
 				       (val + tp->snd_ssthresh) >> 1);
 		}
-		if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save &&
+		if (!READ_ONCE(net->ipv4.sysctl_tcp_no_ssthresh_metrics_save) &&
 		    !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) {
 			val = tcp_metric_get(tm, TCP_METRIC_SSTHRESH);
 			if (val && tp->snd_ssthresh > val)
@@ -463,7 +463,7 @@ void tcp_init_metrics(struct sock *sk)
 	if (tcp_metric_locked(tm, TCP_METRIC_CWND))
 		tp->snd_cwnd_clamp = tcp_metric_get(tm, TCP_METRIC_CWND);
 
-	val = net->ipv4.sysctl_tcp_no_ssthresh_metrics_save ?
+	val = READ_ONCE(net->ipv4.sysctl_tcp_no_ssthresh_metrics_save) ?
 	      0 : tcp_metric_get(tm, TCP_METRIC_SSTHRESH);
 	if (val) {
 		tp->snd_ssthresh = val;
@@ -167,16 +167,13 @@ static void tcp_event_data_sent(struct tcp_sock *tp,
 	if (tcp_packets_in_flight(tp) == 0)
 		tcp_ca_event(sk, CA_EVENT_TX_START);
 
-	/* If this is the first data packet sent in response to the
-	 * previous received data,
-	 * and it is a reply for ato after last received packet,
-	 * increase pingpong count.
-	 */
-	if (before(tp->lsndtime, icsk->icsk_ack.lrcvtime) &&
-	    (u32)(now - icsk->icsk_ack.lrcvtime) < icsk->icsk_ack.ato)
-		inet_csk_inc_pingpong_cnt(sk);
-
 	tp->lsndtime = now;
+
+	/* If it is a reply for ato after last received
+	 * packet, enter pingpong mode.
+	 */
+	if ((u32)(now - icsk->icsk_ack.lrcvtime) < icsk->icsk_ack.ato)
+		inet_csk_enter_pingpong_mode(sk);
 }
 
 /* Account for an ACK we sent. */
@@ -230,7 +227,7 @@ void tcp_select_initial_window(const struct sock *sk, int __space, __u32 mss,
 	 * which we interpret as a sign the remote TCP is not
 	 * misinterpreting the window field as a signed quantity.
 	 */
-	if (sock_net(sk)->ipv4.sysctl_tcp_workaround_signed_windows)
+	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_workaround_signed_windows))
 		(*rcv_wnd) = min(space, MAX_TCP_WINDOW);
 	else
 		(*rcv_wnd) = min_t(u32, space, U16_MAX);
@@ -241,7 +238,7 @@ void tcp_select_initial_window(const struct sock *sk, int __space, __u32 mss,
 	*rcv_wscale = 0;
 	if (wscale_ok) {
 		/* Set window scaling on max possible window */
-		space = max_t(u32, space, sock_net(sk)->ipv4.sysctl_tcp_rmem[2]);
+		space = max_t(u32, space, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
 		space = max_t(u32, space, sysctl_rmem_max);
 		space = min_t(u32, space, *window_clamp);
 		*rcv_wscale = clamp_t(int, ilog2(space) - 15,
@@ -285,7 +282,7 @@ static u16 tcp_select_window(struct sock *sk)
 	 * scaled window.
 	 */
 	if (!tp->rx_opt.rcv_wscale &&
-	    sock_net(sk)->ipv4.sysctl_tcp_workaround_signed_windows)
+	    READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_workaround_signed_windows))
 		new_win = min(new_win, MAX_TCP_WINDOW);
 	else
 		new_win = min(new_win, (65535U << tp->rx_opt.rcv_wscale));
@@ -1976,7 +1973,7 @@ static u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now,
 
 	bytes = sk->sk_pacing_rate >> READ_ONCE(sk->sk_pacing_shift);
 
-	r = tcp_min_rtt(tcp_sk(sk)) >> sock_net(sk)->ipv4.sysctl_tcp_tso_rtt_log;
+	r = tcp_min_rtt(tcp_sk(sk)) >> READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_tso_rtt_log);
 	if (r < BITS_PER_TYPE(sk->sk_gso_max_size))
 		bytes += sk->sk_gso_max_size >> r;
 
@@ -1995,7 +1992,7 @@ static u32 tcp_tso_segs(struct sock *sk, unsigned int mss_now)
 
 	min_tso = ca_ops->min_tso_segs ?
 			ca_ops->min_tso_segs(sk) :
-			sock_net(sk)->ipv4.sysctl_tcp_min_tso_segs;
+			READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_min_tso_segs);
 
 	tso_segs = tcp_tso_autosize(sk, mss_now, min_tso);
 	return min_t(u32, tso_segs, sk->sk_gso_max_segs);
@@ -2507,7 +2504,7 @@ static bool tcp_small_queue_check(struct sock *sk, const struct sk_buff *skb,
 		      sk->sk_pacing_rate >> READ_ONCE(sk->sk_pacing_shift));
 	if (sk->sk_pacing_status == SK_PACING_NONE)
 		limit = min_t(unsigned long, limit,
-			      sock_net(sk)->ipv4.sysctl_tcp_limit_output_bytes);
+			      READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_limit_output_bytes));
 	limit <<= factor;
 
 	if (static_branch_unlikely(&tcp_tx_delay_enabled) &&
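The first hunk above (apparently net/ipv4/tcp_output.c) is the pingpong revert: rather than requiring several timely replies before a connection counts as interactive, a single data packet sent within the delayed-ACK timeout (ato) of the last receive re-enters pingpong mode. A self-contained restatement of the restored heuristic, with all names local to this sketch:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct conn {
        uint32_t lrcvtime;      /* time of the last received packet */
        uint32_t ato;           /* delayed-ACK timeout */
        bool     pingpong;      /* "interactive session" flag */
};

/* restored behaviour: one timely reply is enough to enter pingpong mode */
static void on_data_sent(struct conn *c, uint32_t now)
{
        if (now - c->lrcvtime < c->ato)
                c->pingpong = true;
}

int main(void)
{
        struct conn c = { .lrcvtime = 100, .ato = 40, .pingpong = false };

        on_data_sent(&c, 120);  /* reply 20 ticks after receive: interactive */
        printf("pingpong = %d\n", c.pingpong);
        return 0;
}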
@@ -1522,7 +1522,6 @@ static void mld_query_work(struct work_struct *work)
 
 		if (++cnt >= MLD_MAX_QUEUE) {
 			rework = true;
-			schedule_delayed_work(&idev->mc_query_work, 0);
 			break;
 		}
 	}
@@ -1533,8 +1532,10 @@ static void mld_query_work(struct work_struct *work)
 		__mld_query_work(skb);
 	mutex_unlock(&idev->mc_lock);
 
-	if (!rework)
-		in6_dev_put(idev);
+	if (rework && queue_delayed_work(mld_wq, &idev->mc_query_work, 0))
+		return;
+
+	in6_dev_put(idev);
 }
 
 /* called with rcu_read_lock() */
@@ -1624,7 +1625,6 @@ static void mld_report_work(struct work_struct *work)
 
 		if (++cnt >= MLD_MAX_QUEUE) {
 			rework = true;
-			schedule_delayed_work(&idev->mc_report_work, 0);
 			break;
 		}
 	}
@@ -1635,8 +1635,10 @@ static void mld_report_work(struct work_struct *work)
 		__mld_report_work(skb);
 	mutex_unlock(&idev->mc_lock);
 
-	if (!rework)
-		in6_dev_put(idev);
+	if (rework && queue_delayed_work(mld_wq, &idev->mc_report_work, 0))
+		return;
+
+	in6_dev_put(idev);
 }
 
 static bool is_in(struct ifmcaddr6 *pmc, struct ip6_sf_list *psf, int type,
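The four mcast.c hunks fix the refcount race mentioned in the mld_{query | report}_work() summary: the work item is now requeued once, after the loop, and the decision to keep or drop the in6_dev reference keys off the return value of queue_delayed_work(), which is true only when the work was newly queued (so the reference travels with it) and false when a pending instance already owns one. A sketch of that hand-over rule, with every name below local to the sketch:

#include <stdbool.h>
#include <stdio.h>

/* toy context standing in for the in6_dev plus its work item */
struct work_ctx {
        int  refs;
        bool pending;   /* models "work already queued" */
};

/* models queue_delayed_work(): true only if newly queued */
static bool requeue(struct work_ctx *ctx)
{
        if (ctx->pending)
                return false;
        ctx->pending = true;
        return true;
}

/* the fixed shape: hand our reference to the requeued work, or drop it */
static void work_fn(struct work_ctx *ctx, bool rework)
{
        if (rework && requeue(ctx))
                return;         /* the reference now belongs to the next run */
        ctx->refs--;            /* not requeued (or already pending): drop it */
}

int main(void)
{
        struct work_ctx ctx = { .refs = 1, .pending = false };

        work_fn(&ctx, false);   /* final run: reference is released exactly once */
        printf("refs after final run: %d\n", ctx.refs);
        return 0;
}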
@@ -22,6 +22,11 @@
 #include <linux/proc_fs.h>
 #include <net/ping.h>
 
+static void ping_v6_destroy(struct sock *sk)
+{
+	inet6_destroy_sock(sk);
+}
+
 /* Compatibility glue so we can support IPv6 when it's compiled as a module */
 static int dummy_ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len,
 				 int *addr_len)
@@ -181,6 +186,7 @@ struct proto pingv6_prot = {
 	.owner =	THIS_MODULE,
 	.init =		ping_init_sock,
 	.close =	ping_close,
+	.destroy =	ping_v6_destroy,
 	.connect =	ip6_datagram_connect_v6_only,
 	.disconnect =	__udp_disconnect,
 	.setsockopt =	ipv6_setsockopt,
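These two ping.c hunks plug the ipv6_renew_options() memleak: pingv6_prot had no .destroy callback, so per-socket IPv6 option state was never released through inet6_destroy_sock(). The sketch below restates only the wiring idea; the struct and function names are invented for the sketch:

#include <stdio.h>
#include <stdlib.h>

struct sock_stub { void *ipv6_opts; };

struct proto_stub {
        void (*destroy)(struct sock_stub *sk);
};

/* stands in for inet6_destroy_sock() releasing per-socket option state */
static void ping_destroy_stub(struct sock_stub *sk)
{
        free(sk->ipv6_opts);
        sk->ipv6_opts = NULL;
}

static const struct proto_stub pingv6_stub = {
        .destroy = ping_destroy_stub,   /* the hook that was missing */
};

int main(void)
{
        struct sock_stub sk = { .ipv6_opts = malloc(16) };

        /* without a .destroy hook, this free simply never happened: a leak */
        if (pingv6_stub.destroy)
                pingv6_stub.destroy(&sk);
        printf("opts now %p\n", sk.ipv6_opts);
        return 0;
}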
@@ -546,7 +546,7 @@ static int tcp_v6_send_synack(const struct sock *sk, struct dst_entry *dst,
 	if (np->repflow && ireq->pktopts)
 		fl6->flowlabel = ip6_flowlabel(ipv6_hdr(ireq->pktopts));
 
-	tclass = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ?
+	tclass = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos) ?
 			(tcp_rsk(req)->syn_tos & ~INET_ECN_MASK) |
 			(np->tclass & INET_ECN_MASK) :
 			np->tclass;
@@ -1314,7 +1314,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
 	/* Set ToS of the new socket based upon the value of incoming SYN.
 	 * ECT bits are set later in tcp_init_transfer().
 	 */
-	if (sock_net(sk)->ipv4.sysctl_tcp_reflect_tos)
+	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos))
 		newnp->tclass = tcp_rsk(req)->syn_tos & ~INET_ECN_MASK;
 
 	/* Clone native IPv6 options from listening socket (if any)
@@ -377,9 +377,8 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_down)
 	bool cancel_scan;
 	struct cfg80211_nan_func *func;
 
-	spin_lock_bh(&local->fq.lock);
 	clear_bit(SDATA_STATE_RUNNING, &sdata->state);
-	spin_unlock_bh(&local->fq.lock);
+	synchronize_rcu(); /* flush _ieee80211_wake_txqs() */
 
 	cancel_scan = rcu_access_pointer(local->scan_sdata) == sdata;
 	if (cancel_scan)
@@ -1271,7 +1271,7 @@ raise_win:
 	if (unlikely(th->syn))
 		new_win = min(new_win, 65535U) << tp->rx_opt.rcv_wscale;
 	if (!tp->rx_opt.rcv_wscale &&
-	    sock_net(ssk)->ipv4.sysctl_tcp_workaround_signed_windows)
+	    READ_ONCE(sock_net(ssk)->ipv4.sysctl_tcp_workaround_signed_windows))
 		new_win = min(new_win, MAX_TCP_WINDOW);
 	else
 		new_win = min(new_win, (65535U << tp->rx_opt.rcv_wscale));
@@ -1908,7 +1908,7 @@ static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied)
 	if (msk->rcvq_space.copied <= msk->rcvq_space.space)
 		goto new_measure;
 
-	if (sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf &&
+	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf) &&
 	    !(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
 		int rcvmem, rcvbuf;
 		u64 rcvwin, grow;
@@ -1926,7 +1926,7 @@ static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied)
 
 		do_div(rcvwin, advmss);
 		rcvbuf = min_t(u64, rcvwin * rcvmem,
-			       sock_net(sk)->ipv4.sysctl_tcp_rmem[2]);
+			       READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
 
 		if (rcvbuf > sk->sk_rcvbuf) {
 			u32 window_clamp;
@@ -2669,8 +2669,8 @@ static int mptcp_init_sock(struct sock *sk)
 	mptcp_ca_reset(sk);
 
 	sk_sockets_allocated_inc(sk);
-	sk->sk_rcvbuf = sock_net(sk)->ipv4.sysctl_tcp_rmem[1];
-	sk->sk_sndbuf = sock_net(sk)->ipv4.sysctl_tcp_wmem[1];
+	sk->sk_rcvbuf = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[1]);
+	sk->sk_sndbuf = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_wmem[1]);
 
 	return 0;
 }
@@ -1533,7 +1533,7 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
 	mptcp_sock_graft(ssk, sk->sk_socket);
 	iput(SOCK_INODE(sf));
 	WRITE_ONCE(msk->allow_infinite_fallback, false);
-	return err;
+	return 0;
 
 failed_unlink:
 	list_del(&subflow->node);
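The subflow.c hunk corrects the return value on the success path: by the time this line runs, err appears to hold the -EINPROGRESS left over from the subflow's non-blocking connect, which signals an in-flight handshake rather than a failure. A hedged restatement of that convention (both helpers below are invented for the sketch):

#include <errno.h>
#include <stdio.h>

/* invented helper: models a non-blocking kernel connect */
static int nonblocking_connect_stub(void)
{
        return -EINPROGRESS;    /* handshake started; completion is async */
}

static int start_subflow_stub(void)
{
        int err = nonblocking_connect_stub();

        if (err && err != -EINPROGRESS)
                return err;     /* a real failure */
        return 0;               /* "in progress" counts as success here */
}

int main(void)
{
        printf("start_subflow_stub() = %d\n", start_subflow_stub());
        return 0;
}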
@@ -3340,6 +3340,8 @@ int nft_chain_validate(const struct nft_ctx *ctx, const struct nft_chain *chain)
 			if (err < 0)
 				return err;
 		}
+
+		cond_resched();
 	}
 
 	return 0;
@@ -9367,9 +9369,13 @@ static int nf_tables_check_loops(const struct nft_ctx *ctx,
 					break;
 				}
 			}
+
+			cond_resched();
 		}
 
 	list_for_each_entry(set, &ctx->table->sets, list) {
+		cond_resched();
+
 		if (!nft_is_active_next(ctx->net, set))
 			continue;
 		if (!(set->flags & NFT_SET_MAP) ||
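Both nf_tables_api.c hunks drop cond_resched() into loops whose trip count scales with the size of a user-supplied ruleset, so validating a huge table no longer monopolizes a CPU. A userspace analogue of the idea:

#include <sched.h>
#include <stdio.h>

/* validate a (pretend) ruleset, yielding the way the kernel loops now do */
static int validate_rules_stub(long nrules)
{
        for (long i = 0; i < nrules; i++) {
                /* ... per-rule validation work would go here ... */
                sched_yield();  /* userspace stand-in for cond_resched() */
        }
        return 0;
}

int main(void)
{
        printf("%d\n", validate_rules_stub(1000));
        return 0;
}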
@@ -843,11 +843,16 @@ nfqnl_enqueue_packet(struct nf_queue_entry *entry, unsigned int queuenum)
 }
 
 static int
-nfqnl_mangle(void *data, int data_len, struct nf_queue_entry *e, int diff)
+nfqnl_mangle(void *data, unsigned int data_len, struct nf_queue_entry *e, int diff)
 {
 	struct sk_buff *nskb;
 
 	if (diff < 0) {
+		unsigned int min_len = skb_transport_offset(e->skb);
+
+		if (data_len < min_len)
+			return -EINVAL;
+
 		if (pskb_trim(e->skb, data_len))
 			return -ENOMEM;
 	} else if (diff > 0) {
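The nfqnl_mangle() hunk makes the payload length unsigned and rejects any mangled packet shorter than the transport-header offset, so a userspace verdict can no longer trim an skb into its own headers. The snippet below demonstrates only the signed-versus-unsigned comparison pitfall the type change sidesteps:

#include <stdio.h>

int main(void)
{
        int          signed_len = -4;   /* the old 'int data_len' shape */
        unsigned int min_len    = 40;   /* skb_transport_offset() analogue */

        /* the usual arithmetic conversions turn -4 into a huge unsigned
         * value, so an intended "too short" rejection silently passes */
        if ((unsigned int)signed_len < min_len)
                printf("rejected\n");
        else
                printf("accepted despite being negative\n");
        return 0;
}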
@@ -68,6 +68,31 @@ static void nft_queue_sreg_eval(const struct nft_expr *expr,
 	regs->verdict.code = ret;
 }
 
+static int nft_queue_validate(const struct nft_ctx *ctx,
+			      const struct nft_expr *expr,
+			      const struct nft_data **data)
+{
+	static const unsigned int supported_hooks = ((1 << NF_INET_PRE_ROUTING) |
+						     (1 << NF_INET_LOCAL_IN) |
+						     (1 << NF_INET_FORWARD) |
+						     (1 << NF_INET_LOCAL_OUT) |
+						     (1 << NF_INET_POST_ROUTING));
+
+	switch (ctx->family) {
+	case NFPROTO_IPV4:
+	case NFPROTO_IPV6:
+	case NFPROTO_INET:
+	case NFPROTO_BRIDGE:
+		break;
+	case NFPROTO_NETDEV: /* lacks okfn */
+		fallthrough;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return nft_chain_validate_hooks(ctx->chain, supported_hooks);
+}
+
 static const struct nla_policy nft_queue_policy[NFTA_QUEUE_MAX + 1] = {
 	[NFTA_QUEUE_NUM]	= { .type = NLA_U16 },
 	[NFTA_QUEUE_TOTAL]	= { .type = NLA_U16 },
@@ -164,6 +189,7 @@ static const struct nft_expr_ops nft_queue_ops = {
 	.eval		= nft_queue_eval,
 	.init		= nft_queue_init,
 	.dump		= nft_queue_dump,
+	.validate	= nft_queue_validate,
 	.reduce		= NFT_REDUCE_READONLY,
 };
@@ -173,6 +199,7 @@ static const struct nft_expr_ops nft_queue_sreg_ops = {
 	.eval		= nft_queue_sreg_eval,
 	.init		= nft_queue_sreg_init,
 	.dump		= nft_queue_sreg_dump,
+	.validate	= nft_queue_validate,
 	.reduce		= NFT_REDUCE_READONLY,
 };
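The new nft_queue_validate() hook rejects families that cannot honour a queue verdict (netdev, as the comment notes, lacks the okfn needed for reinjection) and restricts the expression to the five inet-style hooks. The same allowlist idea in a self-contained form, with constant names local to this sketch:

#include <stdio.h>

enum { PRE_ROUTING, LOCAL_IN, FORWARD, LOCAL_OUT, POST_ROUTING, INGRESS };

static int validate_hook(unsigned int hook)
{
        static const unsigned int supported =
                (1u << PRE_ROUTING) | (1u << LOCAL_IN) | (1u << FORWARD) |
                (1u << LOCAL_OUT) | (1u << POST_ROUTING);

        /* anything outside the mask is refused up front, -EOPNOTSUPP style */
        return (supported & (1u << hook)) ? 0 : -1;
}

int main(void)
{
        printf("forward: %d, ingress: %d\n",
               validate_hook(FORWARD), validate_hook(INGRESS));
        return 0;
}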
@@ -229,9 +229,8 @@ static struct sctp_association *sctp_association_init(
 	if (!sctp_ulpq_init(&asoc->ulpq, asoc))
 		goto fail_init;
 
-	if (sctp_stream_init(&asoc->stream, asoc->c.sinit_num_ostreams,
-			     0, gfp))
-		goto fail_init;
+	if (sctp_stream_init(&asoc->stream, asoc->c.sinit_num_ostreams, 0, gfp))
+		goto stream_free;
 
 	/* Initialize default path MTU. */
 	asoc->pathmtu = sp->pathmtu;
@@ -137,7 +137,7 @@ int sctp_stream_init(struct sctp_stream *stream, __u16 outcnt, __u16 incnt,
 
 	ret = sctp_stream_alloc_out(stream, outcnt, gfp);
 	if (ret)
-		goto out_err;
+		return ret;
 
 	for (i = 0; i < stream->outcnt; i++)
 		SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
@@ -145,22 +145,9 @@ int sctp_stream_init(struct sctp_stream *stream, __u16 outcnt, __u16 incnt,
 handle_in:
 	sctp_stream_interleave_init(stream);
 	if (!incnt)
-		goto out;
+		return 0;
 
-	ret = sctp_stream_alloc_in(stream, incnt, gfp);
-	if (ret)
-		goto in_err;
-
-	goto out;
-
-in_err:
-	sched->free(stream);
-	genradix_free(&stream->in);
-out_err:
-	genradix_free(&stream->out);
-	stream->outcnt = 0;
-out:
-	return ret;
+	return sctp_stream_alloc_in(stream, incnt, gfp);
 }
 
 int sctp_stream_init_ext(struct sctp_stream *stream, __u16 sid)
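Taken together, the associola.c and stream.c hunks give error-path cleanup a single owner: sctp_stream_init() now just returns on failure, and the caller's stream_free label performs all of the freeing, removing the overlapping in_err/out_err unwinding that could release the out-stream state twice. A compact sketch of the single-owner rule, with every name invented for the sketch:

#include <stdlib.h>

struct stream_stub { void *out, *in; };

/* allocator: on failure, report the error and leave cleanup to the caller */
static int stream_init_stub(struct stream_stub *s)
{
        s->out = malloc(32);
        if (!s->out)
                return -1;
        s->in = malloc(32);
        if (!s->in)
                return -1;      /* no freeing here: not this function's job */
        return 0;
}

/* exactly one place frees, so nothing can be freed twice */
static void stream_free_stub(struct stream_stub *s)
{
        free(s->in);
        free(s->out);
}

int main(void)
{
        struct stream_stub s = { NULL, NULL };
        int err = stream_init_stub(&s);

        /* success or failure, the one cleanup path runs exactly once */
        stream_free_stub(&s);
        return err ? 1 : 0;
}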
@@ -160,7 +160,7 @@ int sctp_sched_set_sched(struct sctp_association *asoc,
 		if (!SCTP_SO(&asoc->stream, i)->ext)
 			continue;
 
-		ret = n->init_sid(&asoc->stream, i, GFP_KERNEL);
+		ret = n->init_sid(&asoc->stream, i, GFP_ATOMIC);
 		if (ret)
 			goto err;
 	}
@@ -517,7 +517,7 @@ static int tipc_sk_create(struct net *net, struct socket *sock,
 	timer_setup(&sk->sk_timer, tipc_sk_timeout, 0);
 	sk->sk_shutdown = 0;
 	sk->sk_backlog_rcv = tipc_sk_backlog_rcv;
-	sk->sk_rcvbuf = sysctl_tipc_rmem[1];
+	sk->sk_rcvbuf = READ_ONCE(sysctl_tipc_rmem[1]);
 	sk->sk_data_ready = tipc_data_ready;
 	sk->sk_write_space = tipc_write_space;
 	sk->sk_destruct = tipc_sock_destruct;
@@ -1376,8 +1376,13 @@ static int tls_device_down(struct net_device *netdev)
 	 * by tls_device_free_ctx. rx_conf and tx_conf stay in TLS_HW.
 	 * Now release the ref taken above.
 	 */
-	if (refcount_dec_and_test(&ctx->refcount))
+	if (refcount_dec_and_test(&ctx->refcount)) {
+		/* sk_destruct ran after tls_device_down took a ref, and
+		 * it returned early. Complete the destruction here.
+		 */
+		list_del(&ctx->list);
 		tls_device_free_ctx(ctx);
+	}
 	}
 
 	up_write(&device_offload_lock);
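The tls_device_down() hunk lets whichever side drops the last reference complete the teardown: if sk_destruct already ran and bailed out early, the netdev-down path now unlinks and frees the context itself. A userspace sketch of the last-reference-wins pattern; the type and helper names are the sketch's own:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct ctx_stub {
        atomic_int refcount;
        /* list linkage, crypto state, ... */
};

/* mirrors refcount_dec_and_test(): true only for the final reference */
static int put_ctx(struct ctx_stub *c)
{
        if (atomic_fetch_sub(&c->refcount, 1) == 1) {
                /* the other owner exited early; finish the destruction:
                 * unlink from the device list, then free */
                free(c);
                return 1;
        }
        return 0;
}

int main(void)
{
        struct ctx_stub *c = malloc(sizeof(*c));

        atomic_init(&c->refcount, 2);           /* socket + netdev-down path */
        put_ctx(c);                             /* first put: still alive */
        printf("freed: %d\n", put_ctx(c));      /* last put frees */
        return 0;
}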