net: core: set qdisc pkt len before tc_classify
commit d2788d3488 ("net: sched: further simplify handle_ing") removed
the call to qdisc_enqueue_root(). However, after this removal we no
longer set the qdisc pkt length. This breaks traffic policing on
ingress.

This is the minimum fix: set the qdisc pkt length before tc_classify.

Only setting the length does remove support for 'stab' on ingress, but
as Alexei pointed out: "Though it was allowed to add qdisc_size_table
to ingress, it's useless. Nothing takes advantage of recomputed
qdisc_pkt_len".

Jamal suggested using qdisc_pkt_len_init(), but as Eric mentioned, that
would result in qdisc_pkt_len_init() no longer being inlined due to the
additional second call site.

Ingress policing is rare, and GRO does not work well with police on
ingress: we see packets larger than the MTU and drop skbs that, without
aggregation, would still have fit the policer budget. Thus, for
reliable/smooth ingress policing, GRO has to be turned off.

Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Fixes: d2788d3488 ("net: sched: further simplify handle_ing")
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Eric Dumazet <edumazet@google.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 0c58a2db91
commit 3365495c18
@@ -3647,8 +3647,9 @@ static inline struct sk_buff *handle_ing(struct sk_buff *skb,
 		*pt_prev = NULL;
 	}
 
-	qdisc_bstats_update_cpu(cl->q, skb);
+	qdisc_skb_cb(skb)->pkt_len = skb->len;
 	skb->tc_verd = SET_TC_AT(skb->tc_verd, AT_INGRESS);
+	qdisc_bstats_update_cpu(cl->q, skb);
 
 	switch (tc_classify(skb, cl, &cl_res)) {
 	case TC_ACT_OK:
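
For context on why the single assignment restores policing: the ingress policer accounts each packet against its budget using the qdisc pkt len stored in the skb control block, so if that field is never initialised the policer charges a stale or zero value instead of the real packet length and rate limiting breaks. Below is a minimal user-space sketch of that dependency; toy_skb, toy_police and the budget numbers are hypothetical stand-ins for illustration, not kernel APIs.

/*
 * Illustrative model only: a budget check driven by pkt_len is
 * meaningless unless pkt_len is set from the real packet length
 * before classification/policing runs.
 */
#include <stdio.h>

struct toy_skb {
	unsigned int len;      /* real packet length */
	unsigned int pkt_len;  /* models qdisc_skb_cb(skb)->pkt_len */
};

/* Toy policer: charge pkt_len against a byte budget, drop on exceed. */
static int toy_police(const struct toy_skb *skb, unsigned int *budget)
{
	if (skb->pkt_len > *budget)
		return 1;               /* exceed -> drop */
	*budget -= skb->pkt_len;
	return 0;                       /* conform -> pass */
}

int main(void)
{
	struct toy_skb skb = { .len = 1500, .pkt_len = 0 };
	unsigned int budget = 4000;
	int i;

	/* Without this line the toy policer sees 0-byte packets and never
	 * runs out of budget, i.e. policing is silently broken. */
	skb.pkt_len = skb.len;

	for (i = 0; i < 5; i++)
		printf("packet %d: %s\n", i,
		       toy_police(&skb, &budget) ? "drop" : "pass");
	return 0;
}

With the assignment in place the first packets pass and drops start once the 4000-byte budget is spent; comment the assignment out and every packet passes, which mirrors the regression the patch fixes.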