From: Eric Dumazet
Date: Fri, 7 May 2010 04:58:51 +0000 (-0700)
Subject: net: Increase NET_SKB_PAD to 64 bytes
X-Git-Tag: v2.6.35-rc1~473^2~248
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=18e8c134f4e984e6639e62846345192816f06d5c;p=profile%2Fivi%2Fkernel-x86-ivi.git

net: Increase NET_SKB_PAD to 64 bytes

eth_type_trans() & get_rps_cpus() currently need two 64-byte cache lines
in the packet to compute the rxhash.

Increasing NET_SKB_PAD from 32 to 64 reduces the need to one cache line
only, and makes RPS faster.

NET_IP_ALIGN(2) + ethernet_header(14) + IP_header(20/40) + ports(8)

Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller
---

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 88d5539..c9525bc 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1361,9 +1361,12 @@ static inline int skb_network_offset(const struct sk_buff *skb)
  *
  * Various parts of the networking layer expect at least 32 bytes of
  * headroom, you should not reduce this.
+ * With RPS, we raised NET_SKB_PAD to 64 so that get_rps_cpus() fetches span
+ * a 64 bytes aligned block to fit modern (>= 64 bytes) cache line sizes
+ * NET_IP_ALIGN(2) + ethernet_header(14) + IP_header(20/40) + ports(8)
  */
 #ifndef NET_SKB_PAD
-#define NET_SKB_PAD	32
+#define NET_SKB_PAD	64
 #endif
 
 extern int ___pskb_trim(struct sk_buff *skb, unsigned int len);
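
For illustration only, here is a minimal driver-side sketch (not part of this patch) of how the 64-byte budget in the commit message adds up on receive. The names example_rx_alloc() and EXAMPLE_RXHASH_BYTES are hypothetical; the helpers it calls (netdev_alloc_skb(), skb_reserve(), BUILD_BUG_ON()) are existing kernel APIs, and the alignment comment assumes the usual case of cache-line-aligned skb data.

#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/if_ether.h>
#include <linux/ipv6.h>

/*
 * Worst-case bytes touched when computing the rxhash, per the commit
 * message: NET_IP_ALIGN(2) + ethernet_header(14) + IPv6_header(40) + ports(8).
 * Hypothetical name, used only for this example.
 */
#define EXAMPLE_RXHASH_BYTES \
	(NET_IP_ALIGN + ETH_HLEN + sizeof(struct ipv6hdr) + 8)

/* Hypothetical RX buffer allocation helper for a driver. */
static struct sk_buff *example_rx_alloc(struct net_device *dev,
					unsigned int frame_len)
{
	struct sk_buff *skb;

	/* With NET_SKB_PAD == 64, everything the rxhash needs fits inside
	 * one cache line of packet data on CPUs with 64-byte lines. */
	BUILD_BUG_ON(EXAMPLE_RXHASH_BYTES > 64);

	/* netdev_alloc_skb() reserves NET_SKB_PAD bytes of headroom, so the
	 * frame handed to the NIC starts at a 64-byte offset from the
	 * (typically cache-line-aligned) start of the skb data area. */
	skb = netdev_alloc_skb(dev, frame_len + NET_IP_ALIGN);
	if (!skb)
		return NULL;

	/* Shift by NET_IP_ALIGN so the IP header is 4-byte aligned; this
	 * offset is already counted in the 64-byte budget above. */
	skb_reserve(skb, NET_IP_ALIGN);
	return skb;
}

With the old value of 32, the frame started mid-way through a cache line, so the 64 bytes needed for the hash straddled two lines; raising the pad to 64 keeps them in one.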