netfilter: nft_set_rbtree: Switch to node list walk for overlap detection
author Pablo Neira Ayuso <pablo@netfilter.org>
Sat, 14 Jan 2023 22:38:32 +0000 (23:38 +0100)
committer Pablo Neira Ayuso <pablo@netfilter.org>
Mon, 23 Jan 2023 20:36:38 +0000 (21:36 +0100)
commit c9e6978e2725a7d4b6cd23b2facd3f11422c0643
tree f08e1334f56aaa238eb62132126fbedf52e7ff81
parent 71ab9c3e2253619136c31c89dbb2c69305cc89b1
netfilter: nft_set_rbtree: Switch to node list walk for overlap detection

...instead of a tree descent, which became overly complicated in an
attempt to cover cases where expired or inactive elements would affect
comparisons with the new element being inserted.

Further, it turned out that it's probably impossible to cover all those
cases, as inactive nodes might entirely hide subtrees consisting of a
complete interval plus a node that makes the current insertion not
overlap.

To speed up the overlap check, descend the tree to find a greater
element that is closer to the key value to insert. Then walk down the
node list for overlap detection. Starting the overlap check from
rb_first() unconditionally is slow: it takes 10 times longer due to the
full linear traversal of the list.
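
A rough sketch of the descent phase, kept deliberately simple (this is
not the code from nft_set_rbtree.c; struct demo_elem and its fields are
made up for the example): while descending, remember the last node seen
whose key is greater than or equal to the start of the new interval, so
that the list walk can begin there instead of at rb_first().

#include <linux/rbtree.h>
#include <linux/types.h>

struct demo_elem {			/* illustrative element layout */
	struct rb_node	node;
	u32		key;		/* start or end point of an interval */
	bool		interval_end;	/* true if this node closes an interval */
	bool		expired;	/* illustrative expiry flag */
};

/* Return the leftmost node whose key is >= new_start, or NULL if the
 * tree holds no such element.
 */
static struct rb_node *demo_find_walk_start(struct rb_root *root, u32 new_start)
{
	struct rb_node *n = root->rb_node, *first = NULL;

	while (n) {
		struct demo_elem *e = rb_entry(n, struct demo_elem, node);

		if (e->key >= new_start) {
			first = n;	/* candidate, look for a closer one on the left */
			n = n->rb_left;
		} else {
			n = n->rb_right;
		}
	}
	return first;
}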

Moreover, perform garbage collection of expired elements when walking
down the node list to avoid bogus overlap reports.
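
Continuing the sketch above (again illustrative, not the actual helper
in nft_set_rbtree.c; it assumes non-overlapping intervals stored as one
start node plus one end node): the walk begins at the node returned by
demo_find_walk_start(), erases expired elements on the way so they
cannot trigger bogus overlap reports, and lets the first active node at
or after the new start key decide whether the new interval overlaps.

static bool demo_overlap(struct rb_root *root, struct rb_node *first,
			 u32 new_start, u32 new_end)
{
	struct rb_node *node, *next;

	for (node = first; node; node = next) {
		struct demo_elem *e = rb_entry(node, struct demo_elem, node);

		next = rb_next(node);	/* save before a possible erase */

		if (e->expired) {
			rb_erase(node, root);	/* garbage-collect expired node */
			/* a real implementation would also free the element */
			continue;
		}

		/* An end node here means an existing interval is still open
		 * at new_start, so it overlaps the new one. A start node
		 * overlaps only if it begins at or before new_end.
		 */
		if (e->interval_end)
			return true;
		return e->key <= new_end;
	}
	return false;	/* nothing at or after new_start, no overlap */
}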

For the insertion operation itself, this essentially reverts to the
implementation before commit 7c84d41416d8 ("netfilter: nft_set_rbtree:
Detect partial overlaps on insertion"), except that cases of complete
overlap are already handled in the overlap detection phase itself, which
slightly simplifies the loop to find the insertion point.
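
The loop to find the insertion point then becomes a plain rbtree
descent. A hypothetical sketch, reusing the demo types above, with a
bare key comparison standing in for the real set key comparison and
tie-breaking between start and end nodes:

static void demo_insert(struct rb_root *root, struct demo_elem *new)
{
	struct rb_node **p = &root->rb_node, *parent = NULL;

	while (*p) {
		struct demo_elem *e = rb_entry(*p, struct demo_elem, node);

		parent = *p;
		if (new->key < e->key)
			p = &parent->rb_left;
		else
			p = &parent->rb_right;
	}

	rb_link_node(&new->node, parent, p);
	rb_insert_color(&new->node, root);
}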

Based on an initial patch from Stefano Brivio, including text from the
original patch description.

Fixes: 7c84d41416d8 ("netfilter: nft_set_rbtree: Detect partial overlaps on insertion")
Reviewed-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
net/netfilter/nft_set_rbtree.c