From: Eric Dumazet
Date: Thu, 7 Jul 2022 19:18:46 +0000 (+0000)
Subject: net: minor optimization in __alloc_skb()
X-Git-Tag: v6.1-rc5~731^2~174
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=c2dd4059dc31ee6f5b83c8d2064bb1f1f465bcec;p=platform%2Fkernel%2Flinux-starfive.git

net: minor optimization in __alloc_skb()

TCP allocates 'fast clones' skbs for packets in tx queues.

Currently, __alloc_skb() initializes the companion fclone field
to SKB_FCLONE_CLONE, and leaves other fields untouched.

It makes sense to defer this init until skb_clone() time, because
all fclone fields are copied and hot in cpu caches at that point.

This removes one cache line miss in __alloc_skb(), a cost seen
on a host with 256 cpus all competing on memory accesses.

Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller
---

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index c62e42d..c4a7517 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -454,8 +454,6 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 
 		skb->fclone = SKB_FCLONE_ORIG;
 		refcount_set(&fclones->fclone_ref, 1);
-
-		fclones->skb2.fclone = SKB_FCLONE_CLONE;
 	}
 
 	return skb;
@@ -1513,6 +1511,7 @@ struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask)
 	    refcount_read(&fclones->fclone_ref) == 1) {
 		n = &fclones->skb2;
 		refcount_set(&fclones->fclone_ref, 2);
+		n->fclone = SKB_FCLONE_CLONE;
 	} else {
 		if (skb_pfmemalloc(skb))
 			gfp_mask |= __GFP_MEMALLOC;
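
For readers outside the kernel tree, the sketch below is a minimal,
stand-alone C model of the fast-clone pattern this patch touches; the
struct layout and the alloc_fclone()/clone_fclone() helpers are
simplified stand-ins invented for illustration, not the real
net/core/skbuff.c code. It shows why the SKB_FCLONE_CLONE write is
nearly free at clone time: skb_clone() has just copied the whole
companion skb, so its cache line is already hot, whereas __alloc_skb()
would have to pull that line in only to store one field.

/*
 * Not kernel code: a minimal stand-alone model of the fast-clone
 * pair. Names mirror the kernel's, but every type here is a
 * simplified stand-in (e.g. a plain int instead of refcount_t).
 */
#include <stdio.h>

enum fclone_state {
	SKB_FCLONE_UNAVAILABLE,	/* skb is not part of an fclone pair */
	SKB_FCLONE_ORIG,	/* first skb of the pair, from alloc */
	SKB_FCLONE_CLONE,	/* companion skb, handed out by clone */
};

struct sk_buff {		/* drastically reduced stand-in */
	enum fclone_state fclone;
	unsigned int len;
};

struct sk_buff_fclones {	/* both skbs allocated as one object */
	struct sk_buff skb1;
	struct sk_buff skb2;	/* may live on a different cache line */
	int fclone_ref;
};

/* After the patch, allocation touches only skb1's cache line;
 * skb2 stays cold until someone actually clones the skb. */
static struct sk_buff *alloc_fclone(struct sk_buff_fclones *fclones)
{
	fclones->skb1.fclone = SKB_FCLONE_ORIG;
	fclones->fclone_ref = 1;
	/* no write to fclones->skb2 here anymore */
	return &fclones->skb1;
}

/* Clone copies skb1 into skb2, so skb2's cache line is hot and the
 * deferred state write costs almost nothing. */
static struct sk_buff *clone_fclone(struct sk_buff_fclones *fclones)
{
	struct sk_buff *n = &fclones->skb2;

	fclones->skb2 = fclones->skb1;	/* stands in for the field copy */
	fclones->fclone_ref = 2;
	n->fclone = SKB_FCLONE_CLONE;	/* the write this patch defers */
	return n;
}

int main(void)
{
	struct sk_buff_fclones pair = { 0 };
	struct sk_buff *skb = alloc_fclone(&pair);
	struct sk_buff *n = clone_fclone(&pair);

	printf("orig state=%d clone state=%d ref=%d\n",
	       skb->fclone, n->fclone, pair.fclone_ref);
	return 0;
}

In the real kernel the pair comes from the skbuff_fclone_cache
kmem_cache and fclone_ref is a refcount_t; the sketch flattens both
to keep the example self-contained and compilable.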