From: Daniel Borkmann
Date: Sat, 14 Nov 2020 01:29:00 +0000 (+0100)
Subject: Merge branch 'xdp-redirect-bulk'

Lorenzo Bianconi says:

====================
The XDP bulk APIs introduce a defer/flush mechanism to return pages belonging to the same xdp_mem_allocator object (identified via the mem.id field) in bulk, in order to optimize I-cache and D-cache usage, since xdp_return_frame is usually run inside the driver NAPI tx completion loop. Convert the mvneta, mvpp2 and mlx5 drivers to the xdp_return_frame_bulk APIs.

More details on benchmarks run on mlx5 can be found here:
https://github.com/xdp-project/xdp-project/blob/master/areas/mem/xdp_bulk_return01.org

Changes since v5:
- do not keep looping over the ptr_ring if the cache is full, but release leftover pages by running page_pool_return_page

Changes since v4:
- fix comments
- introduce the xdp_frame_bulk_init utility routine
- compiler annotations for I-cache code layout
- move rcu_read_lock outside the fast path
- mlx5 XDP bulking code optimization

Changes since v3:
- align DEV_MAP_BULK_SIZE to XDP_BULK_QUEUE_SIZE
- refactor page_pool_put_page_bulk to avoid code duplication

Changes since v2:
- move the mvneta changes into a dedicated patch

Changes since v1:
- improve comments
- rework the xdp_return_frame_bulk routine logic
- move the count and xa fields to the beginning of struct xdp_frame_bulk
- invert the for-loop logic in page_pool_put_page_bulk
====================

Signed-off-by: Daniel Borkmann
Acked-by: Jesper Dangaard Brouer
---
c14d61fca0d10498bf267c0ab1f381dd0b35d96b