RCU and Unloadable Modules

[Originally published in LWN Jan. 14, 2007: http://lwn.net/Articles/217484/]

RCU (read-copy update) is a synchronization mechanism that can be thought
of as a replacement for reader-writer locking (among other things), but with
very low-overhead readers that are immune to deadlock, priority inversion,
and unbounded latency. RCU read-side critical sections are delimited
by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
kernels, generate no code whatsoever.

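For example, a reader traversing an RCU-protected linked list might look
something like the following sketch. The list head mylist, the element
type pstruct, its list field, and do_something_with() are hypothetical
names used only for illustration:

	struct pstruct *p;

	rcu_read_lock();  /* begin RCU read-side critical section */
	list_for_each_entry_rcu(p, &mylist, list)
		do_something_with(p);  /* p cannot be freed out from under us */
	rcu_read_unlock();  /* end critical section; updaters may free old versions */
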
This means that RCU writers are unaware of the presence of concurrent
readers, so that RCU updates to shared data must be undertaken quite
carefully, leaving an old version of the data structure in place until all
pre-existing readers have finished. These old versions are needed because
such readers might hold a reference to them. RCU updates can therefore be
rather expensive, and RCU is thus best suited for read-mostly situations.

How can an RCU writer possibly determine when all readers are finished,
given that readers might well leave absolutely no trace of their
presence? There is a synchronize_rcu() primitive that blocks until all
pre-existing readers have completed. An updater wishing to delete an
element p from a linked list might do the following, while holding an
appropriate lock, of course:

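	list_del_rcu(p);
	synchronize_rcu();
	kfree(p);
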
But the above code cannot be used in IRQ context -- the call_rcu()
primitive must be used instead. This primitive takes a pointer to an
rcu_head struct placed within the RCU-protected data structure and
another pointer to a function that may be invoked later to free that
structure. Code to delete an element p from the linked list from IRQ
context might then be as follows:

	list_del_rcu(p);
	call_rcu(&p->rcu, p_callback);

Since call_rcu() never blocks, this code can safely be used from within
IRQ context. The function p_callback() might be defined as follows:

	static void p_callback(struct rcu_head *rp)
	{
		struct pstruct *p = container_of(rp, struct pstruct, rcu);

		kfree(p);
	}

Unloading Modules That Use call_rcu()

But what if p_callback is defined in an unloadable module?

If we unload the module while some RCU callbacks are pending,
the CPUs executing these callbacks are going to be severely
disappointed when they are later invoked, as fancifully depicted at
http://lwn.net/images/ns/kernel/rcu-drop.jpg.

We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient. Although synchronize_rcu() does wait for a
grace period to elapse, it does not wait for the callbacks to complete.

One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred
in order to allow other processing to proceed. Such deferral is required
in realtime kernels in order to avoid excessive scheduling latencies.

We instead need the rcu_barrier() primitive. Rather than waiting for
a grace period to elapse, rcu_barrier() waits for all outstanding RCU
callbacks to complete. Please note that rcu_barrier() does -not- imply
synchronize_rcu(); in particular, if there are no RCU callbacks queued
anywhere, rcu_barrier() is within its rights to return immediately,
without waiting for a grace period to elapse.

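One consequence is that code needing both guarantees, waiting for all
outstanding callbacks -and- for a full grace period, presumably must
invoke both primitives, for example (a sketch only):

	rcu_barrier();      /* wait for all previously queued RCU callbacks */
	synchronize_rcu();  /* then additionally wait for a full grace period */
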
Pseudo-code using rcu_barrier() is as follows:

1. Prevent any new RCU callbacks from being posted.
2. Execute rcu_barrier().
3. Allow the module to be unloaded.

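A minimal module exit path following this pattern might look something
like the following sketch. The stop_posting flag (which the module's
call_rcu() call sites would need to check) and my_exit() are hypothetical:

	static atomic_t stop_posting = ATOMIC_INIT(0);  /* gates all call_rcu() call sites */

	static void __exit my_exit(void)
	{
		atomic_set(&stop_posting, 1);  /* 1. prevent new RCU callbacks */
		rcu_barrier();                 /* 2. wait for pending callbacks */
	}                                      /* 3. module may now be unloaded */
	module_exit(my_exit);
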
There are also rcu_barrier_bh(), rcu_barrier_sched(), and srcu_barrier()
functions for the other flavors of RCU, and you of course must match
the flavor of rcu_barrier() with that of call_rcu(). If your module
uses multiple flavors of call_rcu(), then it must also use multiple
flavors of rcu_barrier() when unloading that module. For example, if
it uses call_rcu_bh(), call_srcu() on srcu_struct_1, and call_srcu() on
srcu_struct_2, then the following three lines of code will be required
when unloading:

 1 rcu_barrier_bh();
 2 srcu_barrier(&srcu_struct_1);
 3 srcu_barrier(&srcu_struct_2);

The rcutorture module makes use of rcu_barrier() in its exit function
as follows:

 1 static void
 2 rcu_torture_cleanup(void)
 3 {
 4     int i;
 5
 6     fullstop = 1;
 7     if (shuffler_task != NULL) {
 8         VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
 9         kthread_stop(shuffler_task);
10     }
11     shuffler_task = NULL;
12
13     if (writer_task != NULL) {
14         VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
15         kthread_stop(writer_task);
16     }
17     writer_task = NULL;
18
19     if (reader_tasks != NULL) {
20         for (i = 0; i < nrealreaders; i++) {
21             if (reader_tasks[i] != NULL) {
22                 VERBOSE_PRINTK_STRING(
23                     "Stopping rcu_torture_reader task");
24                 kthread_stop(reader_tasks[i]);
25             }
26             reader_tasks[i] = NULL;
27         }
28         kfree(reader_tasks);
29         reader_tasks = NULL;
30     }
31     rcu_torture_current = NULL;
32
33     if (fakewriter_tasks != NULL) {
34         for (i = 0; i < nfakewriters; i++) {
35             if (fakewriter_tasks[i] != NULL) {
36                 VERBOSE_PRINTK_STRING(
37                     "Stopping rcu_torture_fakewriter task");
38                 kthread_stop(fakewriter_tasks[i]);
39             }
40             fakewriter_tasks[i] = NULL;
41         }
42         kfree(fakewriter_tasks);
43         fakewriter_tasks = NULL;
44     }
45
46     if (stats_task != NULL) {
47         VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
48         kthread_stop(stats_task);
49     }
50     stats_task = NULL;
51
52     /* Wait for all RCU callbacks to fire. */
53     rcu_barrier();
54
55     rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
56
57     if (cur_ops->cleanup != NULL)
58         cur_ops->cleanup();
59     if (atomic_read(&n_rcu_torture_error))
60         rcu_torture_print_module_parms("End of test: FAILURE");
61     else
62         rcu_torture_print_module_parms("End of test: SUCCESS");
63 }

Line 6 sets a global variable that prevents any RCU callbacks from
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
module is an exception to this rule, and therefore needs to set this
global variable.

Lines 7-50 stop all the kernel tasks associated with the rcutorture
module. Therefore, once execution reaches line 53, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() call on line 53 waits
for any pre-existing callbacks to complete.

Then lines 55-62 print status and do operation-specific cleanup, and
then return, permitting the module-unload operation to be completed.

Quick Quiz #1: Is there any other situation where rcu_barrier() might
	be required?

Your module might have additional complications. For example, if your
module invokes call_rcu() from timers, you will need to first cancel all
the timers, and only then invoke rcu_barrier() to wait for any remaining
RCU callbacks to complete.

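For example, if the timer handler is what posts the RCU callbacks, the
exit path might look something like this sketch (my_timer and my_exit()
are hypothetical names):

	static void __exit my_exit(void)
	{
		del_timer_sync(&my_timer);  /* handler cannot post new callbacks */
		rcu_barrier();              /* wait for already-posted callbacks */
	}
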
Of course, if your module uses call_rcu_bh(), you will need to invoke
rcu_barrier_bh() before unloading. Similarly, if your module uses
call_rcu_sched(), you will need to invoke rcu_barrier_sched() before
unloading. If your module uses call_rcu(), call_rcu_bh(), -and-
call_rcu_sched(), then you will need to invoke each of rcu_barrier(),
rcu_barrier_bh(), and rcu_barrier_sched().

Implementing rcu_barrier()

Dipankar Sarma's implementation of rcu_barrier() makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
queues. His implementation queues an RCU callback on each of the per-CPU
callback queues, and then waits until they have all started executing, at
which point, all earlier RCU callbacks are guaranteed to have completed.

The original code for rcu_barrier() was as follows:

 1 void rcu_barrier(void)
 2 {
 3     BUG_ON(in_interrupt());
 4     /* Take cpucontrol mutex to protect against CPU hotplug */
 5     mutex_lock(&rcu_barrier_mutex);
 6     init_completion(&rcu_barrier_completion);
 7     atomic_set(&rcu_barrier_cpu_count, 0);
 8     on_each_cpu(rcu_barrier_func, NULL, 0, 1);
 9     wait_for_completion(&rcu_barrier_completion);
10     mutex_unlock(&rcu_barrier_mutex);
11 }

Line 3 verifies that the caller is in process context, and lines 5 and 10
use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the
global completion and counters at a time, which are initialized on lines
6 and 7. Line 8 causes each CPU to invoke rcu_barrier_func(), which is
shown below. Note that the final "1" in on_each_cpu()'s argument list
ensures that all the calls to rcu_barrier_func() will have completed
before on_each_cpu() returns. Line 9 then waits for the completion.

This code was rewritten in 2008 to support rcu_barrier_bh() and
rcu_barrier_sched() in addition to the original rcu_barrier().

The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
to post an RCU callback, as follows:

 1 static void rcu_barrier_func(void *notused)
 2 {
 3     int cpu = smp_processor_id();
 4     struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 5     struct rcu_head *head;
 6
 7     head = &rdp->barrier;
 8     atomic_inc(&rcu_barrier_cpu_count);
 9     call_rcu(head, rcu_barrier_callback);
10 }

Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
which contains the struct rcu_head needed for the later call to
call_rcu(). Line 7 picks up a pointer to this struct rcu_head, and line
8 increments a global counter. This counter will later be decremented
by the callback. Line 9 then registers the rcu_barrier_callback() on
the current CPU's queue.

The rcu_barrier_callback() function simply atomically decrements the
rcu_barrier_cpu_count variable and finalizes the completion when it
reaches zero, as follows:

 1 static void rcu_barrier_callback(struct rcu_head *notused)
 2 {
 3     if (atomic_dec_and_test(&rcu_barrier_cpu_count))
 4         complete(&rcu_barrier_completion);
 5 }

Quick Quiz #2: What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?

rcu_barrier() Summary

The rcu_barrier() primitive has seen relatively little use, since most
code using RCU is in the core kernel rather than in modules. However, if
you are using RCU from an unloadable module, you need to use rcu_barrier()
so that your module may be safely unloaded.

Answers to Quick Quizzes

Quick Quiz #1: Is there any other situation where rcu_barrier() might
	be required?

Answer: Interestingly enough, rcu_barrier() was not originally
	implemented for module unloading. Nikita Danilov was using
	RCU in a filesystem, which resulted in a similar situation at
	filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
	in response, so that Nikita could invoke it during the
	filesystem-unmount process.

	Much later, yours truly hit the RCU module-unload problem when
	implementing rcutorture, and found that rcu_barrier() solves
	this problem as well.

Quick Quiz #2: What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?

Answer: This cannot happen. The reason is that on_each_cpu() has its last
	argument, the wait flag, set to "1". This flag is passed through
	to smp_call_function() and further to smp_call_function_on_cpu(),
	causing the latter to spin until the cross-CPU invocation of
	rcu_barrier_func() has completed. This by itself would prevent
	a grace period from completing on non-CONFIG_PREEMPT kernels,
	since each CPU must undergo a context switch (or other quiescent
	state) before the grace period can complete. However, this is
	of no use in CONFIG_PREEMPT kernels.

	Therefore, on_each_cpu() disables preemption across its call
	to smp_call_function() and also across the local call to
	rcu_barrier_func(). This prevents the local CPU from context
	switching, again preventing grace periods from completing. This
	means that all CPUs have executed rcu_barrier_func() before
	the first rcu_barrier_callback() can possibly execute, in turn
	preventing rcu_barrier_cpu_count from prematurely reaching zero.

	Currently, -rt implementations of RCU keep but a single global
	queue for RCU callbacks, and thus do not suffer from this
	problem. However, when the -rt RCU eventually does have per-CPU
	callback queues, things will have to change. One simple change
	is to add an rcu_read_lock() before line 8 of rcu_barrier()
	and an rcu_read_unlock() after line 8 of this same function. If
	you can think of a better change, please let me know!

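	In code, that simple change might look something like the
	following sketch of the affected lines of rcu_barrier():

		rcu_read_lock();   /* no grace period can complete ... */
		on_each_cpu(rcu_barrier_func, NULL, 0, 1);  /* line 8 */
		rcu_read_unlock(); /* ... until all callbacks are posted */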