.. SPDX-License-Identifier: GPL-2.0

====================================================================
Reference-count design for elements of lists/arrays protected by RCU
====================================================================


Please note that the percpu-ref feature is likely your first
stop if you need to combine reference counts and RCU.  Please see
include/linux/percpu-refcount.h for more information.  However, in
those unusual cases where percpu-ref would consume too much memory,
please read on.
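
For example, a reference-counted structure based on percpu-ref might be
put together roughly as follows.  This sketch is purely illustrative:
the structure and function names are invented here, and a real user
would adapt the teardown sequence to its own lifetime rules::

    #include <linux/completion.h>
    #include <linux/percpu-refcount.h>
    #include <linux/slab.h>

    struct foo {
            struct percpu_ref ref;
            struct completion ref_done;
            /* payload ... */
    };

    static void foo_release(struct percpu_ref *ref)
    {
            struct foo *fp = container_of(ref, struct foo, ref);

            complete(&fp->ref_done);        /* last reference has been dropped */
    }

    static struct foo *foo_alloc(void)
    {
            struct foo *fp = kzalloc(sizeof(*fp), GFP_KERNEL);

            if (!fp)
                    return NULL;
            init_completion(&fp->ref_done);
            /* percpu_ref_init() takes the initial reference. */
            if (percpu_ref_init(&fp->ref, foo_release, 0, GFP_KERNEL)) {
                    kfree(fp);
                    return NULL;
            }
            return fp;
    }

    static void foo_free(struct foo *fp)
    {
            percpu_ref_kill(&fp->ref);              /* drop the initial reference */
            wait_for_completion(&fp->ref_done);     /* wait for remaining references */
            percpu_ref_exit(&fp->ref);              /* free the per-CPU counter */
            kfree(fp);
    }

Readers would then take references with percpu_ref_tryget_live() (or
percpu_ref_get() when the object is known to still be live) and drop
them with percpu_ref_put().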

------------------------------------------------------------------------

Reference counting on elements of lists which are protected by traditional
reader/writer spinlocks or semaphores is straightforward:

Code Listing A::

    1.                                      2.
    add()                                   search_and_reference()
    {                                       {
        alloc_object                            read_lock(&list_lock);
        ...                                     search_for_element
        atomic_set(&el->rc, 1);                 atomic_inc(&el->rc);
        write_lock(&list_lock);                 ...
        add_element                             read_unlock(&list_lock);
        ...                                     ...
        write_unlock(&list_lock);           }
    }

    3.                                      4.
    release_referenced()                    delete()
    {                                       {
        ...                                     write_lock(&list_lock);
        if (atomic_dec_and_test(&el->rc))       ...
            kfree(el);                          remove_element
        ...                                     write_unlock(&list_lock);
    }                                           ...
                                                if (atomic_dec_and_test(&el->rc))
                                                    kfree(el);
                                                ...
                                            }

If this list/array is made lock-free using RCU, for example by changing
the write_lock() in add() and delete() to spin_lock() and changing the
read_lock() in search_and_reference() to rcu_read_lock(), the
atomic_inc() in search_and_reference() could potentially take a
reference to an element that has already been deleted from the
list/array.  Use atomic_inc_not_zero() in this scenario as follows:

Code Listing B::

    1.                                      2.
    add()                                   search_and_reference()
    {                                       {
        alloc_object                            rcu_read_lock();
        ...                                     search_for_element
        atomic_set(&el->rc, 1);                 if (!atomic_inc_not_zero(&el->rc)) {
        spin_lock(&list_lock);                      rcu_read_unlock();
                                                    return FAIL;
        add_element                             }
        ...                                     ...
        spin_unlock(&list_lock);                rcu_read_unlock();
    }                                       }

    3.                                      4.
    release_referenced()                    delete()
    {                                       {
        ...                                     spin_lock(&list_lock);
        if (atomic_dec_and_test(&el->rc))       ...
            call_rcu(&el->head, el_free);       remove_element
        ...                                     spin_unlock(&list_lock);
    }                                           ...
                                                if (atomic_dec_and_test(&el->rc))
                                                    call_rcu(&el->head, el_free);
                                                ...
                                            }

Sometimes, a reference to the element needs to be obtained in the
update (write) stream.  In such cases, atomic_inc_not_zero() might be
overkill, since we hold the update-side spinlock.  One might instead
use atomic_inc() in such cases.
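
For example, an update-side lookup might look as follows, in the same
pseudocode style as the listings above (the function name is
illustrative and does not appear in those listings)::

    update_search_and_reference()
    {
        spin_lock(&list_lock);
        search_for_element
        atomic_inc(&el->rc);    /* el is still on the list, so el->rc >= 1 */
        ...
        spin_unlock(&list_lock);
    }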

It is not always convenient to deal with "FAIL" in the
search_and_reference() code path.  In such cases, the
atomic_dec_and_test() may be moved from delete() to el_free()
as follows:

Code Listing C::

    1.                                      2.
    add()                                   search_and_reference()
    {                                       {
        alloc_object                            rcu_read_lock();
        ...                                     search_for_element
        atomic_set(&el->rc, 1);                 atomic_inc(&el->rc);
        spin_lock(&list_lock);                  ...
        add_element                             rcu_read_unlock();
        ...                                 }
        spin_unlock(&list_lock);            4.
    }                                       delete()
                                            {
    3.                                          spin_lock(&list_lock);
    release_referenced()                        ...
    {                                           remove_element
        ...                                     spin_unlock(&list_lock);
        if (atomic_dec_and_test(&el->rc))       ...
            kfree(el);                          call_rcu(&el->head, el_free);
        ...                                     ...
    }                                       }

    5.
    void el_free(struct rcu_head *rhp)
    {
        release_referenced();
    }

The key point is that the initial reference added by add() is not removed
until after a grace period has elapsed following removal.  This means that
search_and_reference() cannot find this element, which means that the value
of el->rc cannot increase.  Thus, once it reaches zero, there are no
readers that can or ever will be able to reference the element.  The
element can therefore safely be freed.  This in turn guarantees that if
any reader finds the element, that reader may safely acquire a reference
without checking the value of the reference counter.

A clear advantage of the RCU-based pattern in listing C over the one
in listing B is that any call to search_and_reference() that locates
a given object will succeed in obtaining a reference to that object,
even given a concurrent invocation of delete() for that same object.
Similarly, a clear advantage of both listings B and C over listing A is
that a call to delete() is not delayed even if there are an arbitrarily
large number of calls to search_and_reference() searching for the same
object that delete() was invoked on.  Instead, all that is delayed is
the eventual invocation of kfree(), which is usually not a problem on
modern computer systems, even the small ones.

In cases where delete() can sleep, synchronize_rcu() can be called from
delete(), so that el_free() can be subsumed into delete() as follows::

    4.
    delete()
    {
        spin_lock(&list_lock);
        ...
        remove_element
        spin_unlock(&list_lock);
        ...
        synchronize_rcu();
        if (atomic_dec_and_test(&el->rc))
            kfree(el);
        ...
    }

As additional examples in the kernel, the pattern in listing C is used by
reference counting of struct pid, while the pattern in listing B is used by
struct posix_acl.
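
To make the pattern in listing C more concrete, it might be written out
with the kernel's RCU-protected list and refcount_t primitives roughly as
follows.  This is an illustrative sketch only: the element type and the
function bodies are invented for this document, and this is not the
struct pid implementation::

    #include <linux/rculist.h>
    #include <linux/rcupdate.h>
    #include <linux/refcount.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct el {
            struct list_head list;
            struct rcu_head head;
            refcount_t rc;
            long key;
    };

    static LIST_HEAD(el_list);
    static DEFINE_SPINLOCK(list_lock);

    static void release_referenced(struct el *el)
    {
            if (refcount_dec_and_test(&el->rc))
                    kfree(el);
    }

    static void el_free(struct rcu_head *rhp)
    {
            /* Drops the initial reference taken by add(). */
            release_referenced(container_of(rhp, struct el, head));
    }

    static struct el *add(long key)
    {
            struct el *el = kzalloc(sizeof(*el), GFP_KERNEL);

            if (!el)
                    return NULL;
            el->key = key;
            refcount_set(&el->rc, 1);       /* initial reference, dropped by el_free() */
            spin_lock(&list_lock);
            list_add_rcu(&el->list, &el_list);
            spin_unlock(&list_lock);
            return el;
    }

    static struct el *search_and_reference(long key)
    {
            struct el *el;

            rcu_read_lock();
            list_for_each_entry_rcu(el, &el_list, list) {
                    if (el->key == key) {
                            /* Cannot be zero here, per the reasoning above. */
                            refcount_inc(&el->rc);
                            rcu_read_unlock();
                            return el;
                    }
            }
            rcu_read_unlock();
            return NULL;
    }

    static void delete(struct el *el)
    {
            spin_lock(&list_lock);
            list_del_rcu(&el->list);
            spin_unlock(&list_lock);
            call_rcu(&el->head, el_free);
    }

As in listing C, a successful search_and_reference() caller later drops
its reference with release_referenced(), and the initial reference taken
by add() is dropped by el_free() only after a grace period has elapsed
following removal.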