From: Eric Dumazet
Date: Fri, 20 Jan 2023 18:11:36 +0000 (+0000)
Subject: selftests: net: tcp_mmap: populate pages in send path
X-Git-Tag: v6.6.7~3490^2~229
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=057fb03160a88925877548979ab4ab7f9223e118;p=platform%2Fkernel%2Flinux-starfive.git

selftests: net: tcp_mmap: populate pages in send path

In commit 72653ae5303c ("selftests: net: tcp_mmap: Use huge pages in
send path") I made a change to use hugepages for the buffer used by
the client (tx path).

Today, I understood that the cause of poor zerocopy performance was
that after an mmap() of a 512KB memory zone, the kernel uses a single
zero page, mapped 128 times.

This was really the reason for poor tx path performance in zero copy
mode, because this zero page's refcount is under high pressure,
especially when TCP ACK packets are processed on another cpu.

We need either to force a COW on all the memory range, or use
MAP_POPULATE so that a zero page is not abused.

Signed-off-by: Eric Dumazet
Link: https://lore.kernel.org/r/20230120181136.3764521-1-edumazet@google.com
Signed-off-by: Jakub Kicinski
---

diff --git a/tools/testing/selftests/net/tcp_mmap.c b/tools/testing/selftests/net/tcp_mmap.c
index 00f837c..46a02bb 100644
--- a/tools/testing/selftests/net/tcp_mmap.c
+++ b/tools/testing/selftests/net/tcp_mmap.c
@@ -137,7 +137,8 @@ static void *mmap_large_buffer(size_t need, size_t *allocated)
 	if (buffer == (void *)-1) {
 		sz = need;
 		buffer = mmap(NULL, sz, PROT_READ | PROT_WRITE,
-			      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+			      MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE,
+			      -1, 0);
 		if (buffer != (void *)-1)
 			fprintf(stderr, "MAP_HUGETLB attempt failed, look at /sys/kernel/mm/hugepages for optimal performance\n");
 	}
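
For reference, the commit message mentions two possible fixes: passing
MAP_POPULATE (what the patch does) or forcing a COW over the whole range.
The sketch below is not part of the patch; the helper names, the 512KB size
and the one-byte-per-page write loop are illustrative assumptions showing
roughly what each option looks like for an anonymous MAP_PRIVATE mapping.

/* Hypothetical sketch, not from the patch: two ways to avoid having every
 * page of an anonymous MAP_PRIVATE mapping backed by the shared zero page.
 */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Option 1: ask the kernel to prefault all pages at mmap() time. */
static void *map_populated(size_t len)
{
	return mmap(NULL, len, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
}

/* Option 2: map normally, then write one byte per page so each page takes
 * a write fault and gets its own private copy instead of the zero page.
 */
static void *map_then_cow(size_t len)
{
	long page = sysconf(_SC_PAGESIZE);
	unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	size_t off;

	if (p == MAP_FAILED)
		return MAP_FAILED;
	for (off = 0; off < len; off += page)
		p[off] = 0;	/* write fault -> private page */
	return p;
}

int main(void)
{
	size_t len = 512 * 1024;	/* 512KB, as in the commit message */
	void *a = map_populated(len);
	void *b = map_then_cow(len);

	if (a == MAP_FAILED || b == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	munmap(a, len);
	munmap(b, len);
	return 0;
}

Either way, each page ends up individually backed before the send path
starts, so TCP ACK processing on another cpu no longer contends on the
shared zero page's refcount; MAP_POPULATE simply lets the kernel do the
prefaulting in one step instead of relying on a userspace touch loop.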