From: Alexei Starovoitov
Date: Fri, 4 Sep 2020 00:36:41 +0000 (-0700)
Subject: Merge branch 'hashmap_iter_bucket_lock_fix'

Yonghong Song says:

====================

Currently, the bpf hashmap iterator takes a bucket_lock, a spin_lock, before
visiting each element in the bucket. This can cause a deadlock if a map
update/delete from the iterator program operates on an element in the same
bucket of the map being visited. To avoid the deadlock, let us just use
rcu_read_lock instead of bucket_lock. This may result in visiting stale
elements, missing some elements, or repeating some elements, if a concurrent
map delete/update happens on the same map. I think using rcu_read_lock is a
reasonable compromise. Users who care about stale/missing/repeated elements
can use the bpf map batch access syscall interface instead.

Note that another approach would be to check, at bpf_iter link creation time,
whether the iter program might update/delete elements of the visited map, and
if so, reject the link_create. For that approach the verifier would need to
record, for each map, whether an update/delete operation happens. I just feel
this check is too specialized, hence still prefer the rcu_read_lock approach.

Patch #1 has the kernel implementation and Patch #2 adds a selftest which can
trigger the deadlock without Patch #1.

====================

Signed-off-by: Alexei Starovoitov
---