bpf: Fix unnecessary -EBUSY from htab_lock_bucket
author Song Liu <song@kernel.org>
Thu, 12 Oct 2023 05:57:41 +0000 (22:57 -0700)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mon, 20 Nov 2023 10:59:03 +0000 (11:59 +0100)
[ Upstream commit d35381aa73f7e1e8b25f3ed5283287a64d9ddff5 ]

htab_lock_bucket uses the following logic to avoid recursion:

1. preempt_disable();
2. check percpu counter htab->map_locked[hash] for recursion;
   2.1. if map_locked[hash] is already taken, return -EBUSY;
3. raw_spin_lock_irqsave();

However, if an IRQ hits between steps 2 and 3, BPF programs attached to the
IRQ logic will not be able to access the same hash bucket of the hashtab and
will get -EBUSY.
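
To make that window concrete, here is a simplified sketch of
htab_lock_bucket() before this change, reconstructed from the context and
removed lines of the diff below (the parameters beyond the first one are
taken from the upstream file and abbreviated here):

    static inline int htab_lock_bucket(const struct bpf_htab *htab,
                                       struct bucket *b, u32 hash,
                                       unsigned long *pflags)
    {
            unsigned long flags;

            hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);

            /* step 1: disable preemption, but IRQs stay enabled */
            preempt_disable();

            /* step 2: per-CPU recursion check */
            if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
                    __this_cpu_dec(*(htab->map_locked[hash]));
                    preempt_enable();
                    return -EBUSY;          /* step 2.1 */
            }

            /*
             * An IRQ that fires here sees map_locked[hash] already elevated,
             * so a BPF program run from that IRQ gets a spurious -EBUSY.
             */

            /* step 3 */
            raw_spin_lock_irqsave(&b->raw_lock, flags);
            *pflags = flags;

            return 0;
    }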

This -EBUSY is not really necessary. Fix it by disabling IRQs before
checking map_locked:

1. preempt_disable();
2. local_irq_save();
3. check percpu counter htab->map_locked[hash] for recursion;
   3.1. if map_locked[hash] is already taken, return -EBUSY;
4. raw_spin_lock().

Similarly, use raw_spin_unlock() and local_irq_restore() in
htab_unlock_bucket().
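
For reference, this is roughly what the two helpers look like with the fix
applied, assembled from the hunks below (the parts of the signatures not
shown in the hunks are taken from the upstream file):

    static inline int htab_lock_bucket(const struct bpf_htab *htab,
                                       struct bucket *b, u32 hash,
                                       unsigned long *pflags)
    {
            unsigned long flags;

            hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);

            preempt_disable();
            local_irq_save(flags);          /* IRQs off before the recursion check */
            if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
                    __this_cpu_dec(*(htab->map_locked[hash]));
                    local_irq_restore(flags);
                    preempt_enable();
                    return -EBUSY;
            }

            raw_spin_lock(&b->raw_lock);    /* plain lock; IRQs are already disabled */
            *pflags = flags;                /* hand the saved flags back to the caller */

            return 0;
    }

    static inline void htab_unlock_bucket(const struct bpf_htab *htab,
                                          struct bucket *b, u32 hash,
                                          unsigned long flags)
    {
            hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
            raw_spin_unlock(&b->raw_lock);
            __this_cpu_dec(*(htab->map_locked[hash]));
            local_irq_restore(flags);       /* pairs with local_irq_save() in the lock path */
            preempt_enable();
    }

With this ordering, IRQs stay disabled until htab_unlock_bucket() has dropped
map_locked[hash], so an IRQ-context BPF program on the same CPU is deferred
rather than bounced with -EBUSY.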

Fixes: 20b6cc34ea74 ("bpf: Avoid hashtab deadlock with map_locked")
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/7a9576222aa40b1c84ad3a9ba3e64011d1a04d41.camel@linux.ibm.com
Link: https://lore.kernel.org/bpf/20231012055741.3375999-1-song@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
kernel/bpf/hashtab.c

index a8c7e1c5abfac59400cc8fb1857a4249ff3ad09e..fd8d4b0addfca0bcd23a1bcbb440c739327a131a 100644
@@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
        hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
 
        preempt_disable();
+       local_irq_save(flags);
        if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
                __this_cpu_dec(*(htab->map_locked[hash]));
+               local_irq_restore(flags);
                preempt_enable();
                return -EBUSY;
        }
 
-       raw_spin_lock_irqsave(&b->raw_lock, flags);
+       raw_spin_lock(&b->raw_lock);
        *pflags = flags;
 
        return 0;
@@ -172,8 +174,9 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
                                      struct bucket *b, u32 hash,
                                      unsigned long flags)
 {
        hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
-       raw_spin_unlock_irqrestore(&b->raw_lock, flags);
+       raw_spin_unlock(&b->raw_lock);
        __this_cpu_dec(*(htab->map_locked[hash]));
+       local_irq_restore(flags);
        preempt_enable();
 }