Using RCU to Protect Read-Mostly Linked Lists
=============================================

One of the best applications of RCU is to protect read-mostly linked lists
(``struct list_head`` in list.h). One big advantage of this approach
is that all of the required memory barriers are included for you in
the list macros. This document describes several applications of RCU,
with the best fits first.


Example 1: Read-mostly list: Deferred Destruction
-------------------------------------------------

A widely used use case for RCU lists in the kernel is lockless iteration over
all processes in the system. ``task_struct::tasks`` represents the list node
that links all the processes. The list can be traversed in parallel to any
list additions or removals.

The traversal of the list is done using ``for_each_process()``, which is
defined by these two macros::

    #define next_task(p) \
        list_entry_rcu((p)->tasks.next, struct task_struct, tasks)

    #define for_each_process(p) \
        for (p = &init_task ; (p = next_task(p)) != &init_task ; )

The code traversing the list of all processes typically looks like::

    rcu_read_lock();
    for_each_process(p) {
        /* Do something with p */
    }
    rcu_read_unlock();

The simplified code for removing a process from a task list is::

    void release_task(struct task_struct *p)
    {
        write_lock(&tasklist_lock);
        list_del_rcu(&p->tasks);
        write_unlock(&tasklist_lock);
        call_rcu(&p->rcu, delayed_put_task_struct);
    }

When a process exits, ``release_task()`` calls ``list_del_rcu(&p->tasks)``
under ``tasklist_lock`` writer lock protection to remove the task from the
list of all tasks. The ``tasklist_lock`` prevents concurrent list
additions/removals from corrupting the list. Readers using
``for_each_process()`` are not protected with the ``tasklist_lock``. To
prevent readers from noticing changes in the list pointers, the
``task_struct`` object is freed only after one or more grace periods elapse
(with the help of call_rcu()). This deferring of destruction ensures that
any readers traversing the list will see valid ``p->tasks.next`` pointers,
and deletion/freeing can happen in parallel with traversal of the list.
This pattern is also called an **existence lock**, since RCU pins the object
in memory until all existing readers finish.
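
The callback passed to call_rcu() runs after a grace period has elapsed and
uses container_of() to recover the enclosing object from its embedded
``rcu_head``. As a minimal sketch (the real delayed_put_task_struct() in
kernel/exit.c performs additional cleanup before dropping the reference), it
might look like this::

    static void delayed_put_task_struct(struct rcu_head *rhp)
    {
        struct task_struct *tsk = container_of(rhp, struct task_struct, rcu);

        /*
         * All readers that could have observed this task on the list
         * have finished, so dropping the final reference (which may
         * free the task_struct) is now safe.
         */
        put_task_struct(tsk);
    }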


Example 2: Read-Side Action Taken Outside of Lock: No In-Place Updates
----------------------------------------------------------------------

The best applications are cases where, if reader-writer locking were
used, the read-side lock would be dropped before taking any action
based on the results of the search. The most celebrated example is
the routing table. Because the routing table is tracking the state of
equipment outside of the computer, it will at times contain stale data.
Therefore, once the route has been computed, there is no need to hold
the routing table static during transmission of the packet. After all,
you can hold the routing table static all you want, but that won't keep
the external Internet from changing, and it is the state of the external
Internet that really matters. In addition, routing entries are typically
added or deleted, rather than being modified in place.

A straightforward example of this use of RCU may be found in the
system-call auditing support. For example, a reader-writer locked
implementation of ``audit_filter_task()`` might be as follows::

    static enum audit_state audit_filter_task(struct task_struct *tsk)
    {
        struct audit_entry *e;
        enum audit_state state;

        read_lock(&auditsc_lock);
        /* Note: audit_filter_mutex held by caller. */
        list_for_each_entry(e, &audit_tsklist, list) {
            if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
                read_unlock(&auditsc_lock);
                return state;
            }
        }
        read_unlock(&auditsc_lock);
        return AUDIT_BUILD_CONTEXT;
    }

Here the list is searched under the lock, but the lock is dropped before
the corresponding value is returned. By the time that this value is acted
on, the list may well have been modified. This makes sense, since if
you are turning auditing off, it is OK to audit a few extra system calls.

This means that RCU can be easily applied to the read side, as follows::

    static enum audit_state audit_filter_task(struct task_struct *tsk)
    {
        struct audit_entry *e;
        enum audit_state state;

        rcu_read_lock();
        /* Note: audit_filter_mutex held by caller. */
        list_for_each_entry_rcu(e, &audit_tsklist, list) {
            if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
                rcu_read_unlock();
                return state;
            }
        }
        rcu_read_unlock();
        return AUDIT_BUILD_CONTEXT;
    }

The ``read_lock()`` and ``read_unlock()`` calls have become rcu_read_lock()
and rcu_read_unlock(), respectively, and the list_for_each_entry() has
become list_for_each_entry_rcu(). The **_rcu()** list-traversal primitives
insert the read-side memory barriers that are required on DEC Alpha CPUs.
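
To give a sense of what these primitives do, here is a simplified sketch of
their definitions; the in-kernel versions in include/linux/rculist.h also add
lockdep checking that the traversal really is inside an RCU read-side
critical section::

    #define list_entry_rcu(ptr, type, member) \
        container_of(READ_ONCE(ptr), type, member)

    /* Simplified: the real macro also accepts an optional lockdep condition. */
    #define list_for_each_entry_rcu(pos, head, member) \
        for (pos = list_entry_rcu((head)->next, typeof(*pos), member); \
             &pos->member != (head); \
             pos = list_entry_rcu(pos->member.next, typeof(*pos), member))

Each ``->next`` pointer is fetched via READ_ONCE(), which supplies the
dependency ordering that DEC Alpha requires and also keeps the compiler from
refetching or tearing the load.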

The changes to the update side are also straightforward. A reader-writer lock
might be used as follows for deletion and insertion::

    static inline int audit_del_rule(struct audit_rule *rule,
                                     struct list_head *list)
    {
        struct audit_entry *e;

        write_lock(&auditsc_lock);
        list_for_each_entry(e, list, list) {
            if (!audit_compare_rule(rule, &e->rule)) {
                list_del(&e->list);
                write_unlock(&auditsc_lock);
                return 0;
            }
        }
        write_unlock(&auditsc_lock);
        return -EFAULT; /* No matching rule */
    }

    static inline int audit_add_rule(struct audit_entry *entry,
                                     struct list_head *list)
    {
        write_lock(&auditsc_lock);
        if (entry->rule.flags & AUDIT_PREPEND) {
            entry->rule.flags &= ~AUDIT_PREPEND;
            list_add(&entry->list, list);
        } else {
            list_add_tail(&entry->list, list);
        }
        write_unlock(&auditsc_lock);
        return 0;
    }

Following are the RCU equivalents for these two functions::

    static inline int audit_del_rule(struct audit_rule *rule,
                                     struct list_head *list)
    {
        struct audit_entry *e;

        /* No need to use the _rcu iterator here, since this is the only
         * deletion routine. */
        list_for_each_entry(e, list, list) {
            if (!audit_compare_rule(rule, &e->rule)) {
                list_del_rcu(&e->list);
                call_rcu(&e->rcu, audit_free_rule);
                return 0;
            }
        }
        return -EFAULT; /* No matching rule */
    }

    static inline int audit_add_rule(struct audit_entry *entry,
                                     struct list_head *list)
    {
        if (entry->rule.flags & AUDIT_PREPEND) {
            entry->rule.flags &= ~AUDIT_PREPEND;
            list_add_rcu(&entry->list, list);
        } else {
            list_add_tail_rcu(&entry->list, list);
        }
        return 0;
    }

Normally, the ``write_lock()`` and ``write_unlock()`` would be replaced by a
spin_lock() and a spin_unlock(). But in this case, all callers hold
``audit_filter_mutex``, so no additional locking is required. The
``auditsc_lock`` can therefore be eliminated, since use of RCU eliminates the
need for writers to exclude readers.

The list_del(), list_add(), and list_add_tail() primitives have been
replaced by list_del_rcu(), list_add_rcu(), and list_add_tail_rcu().
The **_rcu()** list-manipulation primitives add memory barriers that are
needed on weakly ordered CPUs (most of them!). The list_del_rcu() primitive
omits the pointer poisoning debug-assist code that would otherwise cause
concurrent readers to fail spectacularly.
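
To see why, compare the two deletion primitives. The sketch below is
simplified from include/linux/list.h and include/linux/rculist.h (debug
checks omitted); the key difference is that list_del_rcu() leaves the
``->next`` pointer intact, so a reader currently referencing the removed
entry can still advance to the entry's successor::

    static inline void list_del(struct list_head *entry)
    {
        __list_del_entry(entry);
        entry->next = LIST_POISON1;  /* would derail concurrent readers */
        entry->prev = LIST_POISON2;
    }

    static inline void list_del_rcu(struct list_head *entry)
    {
        __list_del_entry(entry);
        /* ->next is left intact for the benefit of concurrent readers. */
        entry->prev = LIST_POISON2;
    }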

So, when readers can tolerate stale data and when entries are either added or
deleted, without in-place modification, it is very easy to use RCU!


Example 3: Handling In-Place Updates
------------------------------------

The system-call auditing code does not update auditing rules in place. However,
if it did, the reader-writer-locked code to do so might look as follows
(assuming only ``field_count`` is updated; otherwise, the added fields would
need to be filled in)::

    static inline int audit_upd_rule(struct audit_rule *rule,
                                     struct list_head *list,
                                     __u32 newaction,
                                     __u32 newfield_count)
    {
        struct audit_entry *e;

        write_lock(&auditsc_lock);
        /* Note: audit_filter_mutex held by caller. */
        list_for_each_entry(e, list, list) {
            if (!audit_compare_rule(rule, &e->rule)) {
                e->rule.action = newaction;
                e->rule.field_count = newfield_count;
                write_unlock(&auditsc_lock);
                return 0;
            }
        }
        write_unlock(&auditsc_lock);
        return -EFAULT; /* No matching rule */
    }

The RCU version creates a copy, updates the copy, then replaces the old
entry with the newly updated entry. This sequence of actions, allowing
concurrent reads while making a copy to perform an update, is what gives
RCU (*read-copy update*) its name. The RCU code is as follows::

    static inline int audit_upd_rule(struct audit_rule *rule,
                                     struct list_head *list,
                                     __u32 newaction,
                                     __u32 newfield_count)
    {
        struct audit_entry *e;
        struct audit_entry *ne;

        list_for_each_entry(e, list, list) {
            if (!audit_compare_rule(rule, &e->rule)) {
                ne = kmalloc(sizeof(*ne), GFP_ATOMIC);
                if (ne == NULL)
                    return -ENOMEM;
                audit_copy_rule(&ne->rule, &e->rule);
                ne->rule.action = newaction;
                ne->rule.field_count = newfield_count;
                list_replace_rcu(&e->list, &ne->list);
                call_rcu(&e->rcu, audit_free_rule);
                return 0;
            }
        }
        return -EFAULT; /* No matching rule */
    }

Again, this assumes that the caller holds ``audit_filter_mutex``. Normally, the
writer lock would become a spinlock in this sort of code.

Another use of this pattern can be found in the openvswitch driver's
*connection tracking table* code in ``ct_limit_set()``. The table holds
connection tracking entries and has a limit on the maximum entries. There is
one such table per zone and hence one *limit* per zone. The zones are mapped
to their limits through a hashtable using an RCU-managed hlist for the hash
chains. When a new limit is set, a new limit object is allocated and
``ct_limit_set()`` is called to replace the old limit object with the new one
using list_replace_rcu(). The old limit object is then freed after a grace
period using kfree_rcu().
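
A condensed sketch of that code follows, modeled on the openvswitch driver;
note that the actual kernel code uses the hlist variants of the primitives
(hlist_replace_rcu() and hlist_add_head_rcu()) and carries lockdep
annotations not shown here::

    static void ct_limit_set(const struct ovs_ct_limit_info *info,
                             struct ovs_ct_limit *new_ct_limit)
    {
        struct ovs_ct_limit *ct_limit;
        struct hlist_head *head;

        head = ct_limit_hash_bucket(info, new_ct_limit->zone);
        hlist_for_each_entry_rcu(ct_limit, head, hlist_node) {
            if (ct_limit->zone == new_ct_limit->zone) {
                /* Replace the old limit object; free it only after
                 * a grace period has elapsed. */
                hlist_replace_rcu(&ct_limit->hlist_node,
                                  &new_ct_limit->hlist_node);
                kfree_rcu(ct_limit, rcu);
                return;
            }
        }

        /* No existing limit for this zone: add the new one. */
        hlist_add_head_rcu(&new_ct_limit->hlist_node, head);
    }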


Example 4: Eliminating Stale Data
---------------------------------

The auditing example above tolerates stale data, as do most algorithms
that are tracking external state. Because there is a delay from the
time the external state changes before Linux becomes aware of the change,
additional RCU-induced staleness is generally not a problem.

However, there are many examples where stale data cannot be tolerated.
One example in the Linux kernel is the System V IPC (see the shm_lock()
function in ipc/shm.c). This code checks a *deleted* flag under a
per-entry spinlock, and, if the *deleted* flag is set, pretends that the
entry does not exist. For this to be helpful, the search function must
return holding the per-entry spinlock, as shm_lock() does in fact do.

.. _quick_quiz:

Quick Quiz:
    For the deleted-flag technique to be helpful, why is it necessary
    to hold the per-entry lock while returning from the search function?

:ref:`Answer to Quick Quiz <quick_quiz_answer>`

If the system-call audit module were to ever need to reject stale data, one way
to accomplish this would be to add a ``deleted`` flag and a ``lock`` spinlock to the
``audit_entry`` structure, and modify ``audit_filter_task()`` as follows::

    static enum audit_state audit_filter_task(struct task_struct *tsk)
    {
        struct audit_entry *e;
        enum audit_state state;

        rcu_read_lock();
        list_for_each_entry_rcu(e, &audit_tsklist, list) {
            if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
                spin_lock(&e->lock);
                if (e->deleted) {
                    spin_unlock(&e->lock);
                    rcu_read_unlock();
                    return AUDIT_BUILD_CONTEXT;
                }
                rcu_read_unlock();
                /* Returns with e->lock held; the caller releases it
                 * once it has finished using the entry. */
                return state;
            }
        }
        rcu_read_unlock();
        return AUDIT_BUILD_CONTEXT;
    }

Note that this example assumes that entries are only added and deleted.
Additional mechanism is required to deal correctly with the update-in-place
performed by ``audit_upd_rule()``. For one thing, ``audit_upd_rule()`` would
need additional memory barriers to ensure that the list_add_rcu() was really
executed before the list_del_rcu().

The ``audit_del_rule()`` function would need to set the ``deleted`` flag under the
spinlock as follows::

    static inline int audit_del_rule(struct audit_rule *rule,
                                     struct list_head *list)
    {
        struct audit_entry *e;

        /* No need to use the _rcu iterator here, since this
         * is the only deletion routine. */
        list_for_each_entry(e, list, list) {
            if (!audit_compare_rule(rule, &e->rule)) {
                spin_lock(&e->lock);
                list_del_rcu(&e->list);
                e->deleted = 1;
                spin_unlock(&e->lock);
                call_rcu(&e->rcu, audit_free_rule);
                return 0;
            }
        }
        return -EFAULT; /* No matching rule */
    }

This too assumes that the caller holds ``audit_filter_mutex``.


Example 5: Skipping Stale Objects
---------------------------------

For some use cases, reader performance can be improved by skipping stale
objects during read-side list traversal, if the object in question is pending
destruction after one or more grace periods. One such example can be found in
the timerfd subsystem. When a ``CLOCK_REALTIME`` clock is reprogrammed (for
example, due to setting of the system time), all programmed timerfds that
depend on this clock get triggered and processes waiting on them are woken up
in advance of their scheduled expiry. To facilitate this, all such timers are
added to an RCU-managed ``cancel_list`` when they are set up in
``timerfd_setup_cancel()``::

    static void timerfd_setup_cancel(struct timerfd_ctx *ctx, int flags)
    {
        spin_lock(&ctx->cancel_lock);
        if (ctx->clockid == CLOCK_REALTIME &&
            (flags & TFD_TIMER_ABSTIME) && (flags & TFD_TIMER_CANCEL_ON_SET)) {
            if (!ctx->might_cancel) {
                ctx->might_cancel = true;
                spin_lock(&cancel_lock);
                list_add_rcu(&ctx->clist, &cancel_list);
                spin_unlock(&cancel_lock);
            }
        }
        spin_unlock(&ctx->cancel_lock);
    }

When a timerfd is freed (fd is closed), then the ``might_cancel`` flag of the
timerfd object is cleared, and the object is removed from the ``cancel_list``
and destroyed, as shown in this simplified and inlined version of
timerfd_release()::

    int timerfd_release(struct inode *inode, struct file *file)
    {
        struct timerfd_ctx *ctx = file->private_data;

        spin_lock(&ctx->cancel_lock);
        if (ctx->might_cancel) {
            ctx->might_cancel = false;
            spin_lock(&cancel_lock);
            list_del_rcu(&ctx->clist);
            spin_unlock(&cancel_lock);
        }
        spin_unlock(&ctx->cancel_lock);

        hrtimer_cancel(&ctx->t.tmr);
        kfree_rcu(ctx, rcu);
        return 0;
    }

If the ``CLOCK_REALTIME`` clock is set, for example by a time server, the
hrtimer framework calls ``timerfd_clock_was_set()`` which walks the
``cancel_list`` and wakes up processes waiting on the timerfd. While iterating
the ``cancel_list``, the ``might_cancel`` flag is consulted to skip stale
objects::

    void timerfd_clock_was_set(void)
    {
        struct timerfd_ctx *ctx;
        unsigned long flags;

        rcu_read_lock();
        list_for_each_entry_rcu(ctx, &cancel_list, clist) {
            if (!ctx->might_cancel)
                continue;
            spin_lock_irqsave(&ctx->wqh.lock, flags);
            if (ctx->moffs != ktime_mono_to_real(0)) {
                ctx->moffs = KTIME_MAX;
                wake_up_locked_poll(&ctx->wqh, EPOLLIN);
            }
            spin_unlock_irqrestore(&ctx->wqh.lock, flags);
        }
        rcu_read_unlock();
    }

The key point is that because RCU-protected traversal of the ``cancel_list``
happens concurrently with list additions and removals, the traversal can
sometimes encounter an object that has been removed from the list. In this
example, the ``might_cancel`` flag is used to skip such stale objects.


Summary
-------

Read-mostly list-based data structures that can tolerate stale data are
the most amenable to use of RCU. The simplest case is where entries are
either added or deleted from the data structure (or atomically modified
in place), but non-atomic in-place modifications can be handled by making
a copy, updating the copy, then replacing the original with the copy.
If stale data cannot be tolerated, then a *deleted* flag may be used
in conjunction with a per-entry spinlock in order to allow the search
function to reject newly deleted data.

.. _quick_quiz_answer:

Answer to Quick Quiz:
    For the deleted-flag technique to be helpful, why is it necessary
    to hold the per-entry lock while returning from the search function?

    If the search function drops the per-entry lock before returning,
    then the caller will be processing stale data in any case. If it
    is really OK to be processing stale data, then you don't need a
    *deleted* flag. If processing stale data really is a problem,
    then you need to hold the per-entry lock across all of the code
    that uses the value that was returned.

:ref:`Back to Quick Quiz <quick_quiz>`