1 .. _kernel_hacking_lock:
3 ===========================
4 Unreliable Guide To Locking
5 ===========================
12 Welcome, to Rusty's Remarkably Unreliable Guide to Kernel Locking
issues. This document describes the locking systems in the Linux Kernel.

16 With the wide availability of HyperThreading, and preemption in the
17 Linux Kernel, everyone hacking on the kernel needs to know the
18 fundamentals of concurrency and locking for SMP.
20 The Problem With Concurrency
21 ============================
23 (Skip this if you know what a Race Condition is).
In a normal program, you can increment a counter like so::

        very_important_count++;

Now suppose two instances of this code run at the same time (for
example, on two different CPUs). This is what you would expect to happen:

.. table:: Expected Results

  +------------------------------------+------------------------------------+
  | Instance 1                         | Instance 2                         |
  +====================================+====================================+
  | read very_important_count (5)      |                                    |
  +------------------------------------+------------------------------------+
  | add 1 (6)                          |                                    |
  +------------------------------------+------------------------------------+
  | write very_important_count (6)     |                                    |
  +------------------------------------+------------------------------------+
  |                                    | read very_important_count (6)      |
  +------------------------------------+------------------------------------+
  |                                    | add 1 (7)                          |
  +------------------------------------+------------------------------------+
  |                                    | write very_important_count (7)     |
  +------------------------------------+------------------------------------+

This is what might happen:

.. table:: Possible Results

  +------------------------------------+------------------------------------+
  | Instance 1                         | Instance 2                         |
  +====================================+====================================+
  | read very_important_count (5)      |                                    |
  +------------------------------------+------------------------------------+
  |                                    | read very_important_count (5)      |
  +------------------------------------+------------------------------------+
  | add 1 (6)                          |                                    |
  +------------------------------------+------------------------------------+
  |                                    | add 1 (6)                          |
  +------------------------------------+------------------------------------+
  | write very_important_count (6)     |                                    |
  +------------------------------------+------------------------------------+
  |                                    | write very_important_count (6)     |
  +------------------------------------+------------------------------------+

74 Race Conditions and Critical Regions
75 ------------------------------------
77 This overlap, where the result depends on the relative timing of
78 multiple tasks, is called a race condition. The piece of code containing
79 the concurrency issue is called a critical region. And especially since
Linux started running on SMP machines, they became one of the major
81 issues in kernel design and implementation.
83 Preemption can have the same effect, even if there is only one CPU: by
84 preempting one task during the critical region, we have exactly the same
85 race condition. In this case the thread which preempts might run the
86 critical region itself.
88 The solution is to recognize when these simultaneous accesses occur, and
89 use locks to make sure that only one instance can enter the critical
90 region at any time. There are many friendly primitives in the Linux
91 kernel to help you do this. And then there are the unfriendly
92 primitives, but I'll pretend they don't exist.
94 Locking in the Linux Kernel
95 ===========================
97 If I could give you one piece of advice on locking: **keep it simple**.
99 Be reluctant to introduce new locks.
101 Two Main Types of Kernel Locks: Spinlocks and Mutexes
102 -----------------------------------------------------
104 There are two main types of kernel locks. The fundamental type is the
105 spinlock (``include/asm/spinlock.h``), which is a very simple
106 single-holder lock: if you can't get the spinlock, you keep trying
(spinning) until you can. Spinlocks are very small and fast, and can be
used anywhere.

110 The second type is a mutex (``include/linux/mutex.h``): it is like a
111 spinlock, but you may block holding a mutex. If you can't lock a mutex,
112 your task will suspend itself, and be woken up when the mutex is
113 released. This means the CPU can do something else while you are
114 waiting. There are many cases when you simply can't sleep (see
115 `What Functions Are Safe To Call From Interrupts?`_),
116 and so have to use a spinlock instead.
118 Neither type of lock is recursive: see
119 `Deadlock: Simple and Advanced`_.
121 Locks and Uniprocessor Kernels
122 ------------------------------
124 For kernels compiled without ``CONFIG_SMP``, and without
``CONFIG_PREEMPT``, spinlocks do not exist at all. This is an excellent
126 design decision: when no-one else can run at the same time, there is no
127 reason to have a lock.
129 If the kernel is compiled without ``CONFIG_SMP``, but ``CONFIG_PREEMPT``
130 is set, then spinlocks simply disable preemption, which is sufficient to
131 prevent any races. For most purposes, we can think of preemption as
132 equivalent to SMP, and not worry about it separately.
134 You should always test your locking code with ``CONFIG_SMP`` and
135 ``CONFIG_PREEMPT`` enabled, even if you don't have an SMP test box,
136 because it will still catch some kinds of locking bugs.
138 Mutexes still exist, because they are required for synchronization
139 between user contexts, as we will see below.
141 Locking Only In User Context
142 ----------------------------
144 If you have a data structure which is only ever accessed from user
145 context, then you can use a simple mutex (``include/linux/mutex.h``) to
146 protect it. This is the most trivial case: you initialize the mutex.
147 Then you can call mutex_lock_interruptible() to grab the
148 mutex, and mutex_unlock() to release it. There is also a
149 mutex_lock(), which should be avoided, because it will
150 not return if a signal is received.
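
As a minimal sketch of this trivial case (the ``my_data_lock`` and
``my_data`` names are invented for illustration, not taken from the
kernel), it might look like this::

    #include <linux/mutex.h>
    #include <linux/errno.h>

    static DEFINE_MUTEX(my_data_lock);   /* hypothetical lock */
    static int my_data;                  /* hypothetical data, user context only */

    int set_my_data(int val)
    {
            /* May sleep, so this is only legal in user context. */
            if (mutex_lock_interruptible(&my_data_lock))
                    return -ERESTARTSYS;
            my_data = val;
            mutex_unlock(&my_data_lock);
            return 0;
    }
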
152 Example: ``net/netfilter/nf_sockopt.c`` allows registration of new
153 setsockopt() and getsockopt() calls, with
154 nf_register_sockopt(). Registration and de-registration
155 are only done on module load and unload (and boot time, where there is
156 no concurrency), and the list of registrations is only consulted for an
157 unknown setsockopt() or getsockopt() system
158 call. The ``nf_sockopt_mutex`` is perfect to protect this, especially
159 since the setsockopt and getsockopt calls may well sleep.
161 Locking Between User Context and Softirqs
162 -----------------------------------------
164 If a softirq shares data with user context, you have two problems.
165 Firstly, the current user context can be interrupted by a softirq, and
166 secondly, the critical region could be entered from another CPU. This is
167 where spin_lock_bh() (``include/linux/spinlock.h``) is
168 used. It disables softirqs on that CPU, then grabs the lock.
169 spin_unlock_bh() does the reverse. (The '_bh' suffix is
170 a historical reference to "Bottom Halves", the old name for software
interrupts. It should really be called spin_lock_softirq() in a
perfect world).

174 Note that you can also use spin_lock_irq() or
175 spin_lock_irqsave() here, which stop hardware interrupts
176 as well: see `Hard IRQ Context`_.
178 This works perfectly for UP as well: the spin lock vanishes, and this
179 macro simply becomes local_bh_disable()
(``include/linux/interrupt.h``), which protects you from the softirq
being run.

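
As a minimal sketch of the pattern (the ``counter_lock`` and
``shared_counter`` names are invented for illustration)::

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(counter_lock);   /* hypothetical lock */
    static unsigned long shared_counter;    /* touched by user context and a softirq */

    /* Called from user context (e.g. a syscall). */
    void counter_add(unsigned long n)
    {
            spin_lock_bh(&counter_lock);    /* block softirqs on this CPU, take lock */
            shared_counter += n;
            spin_unlock_bh(&counter_lock);
    }

    /* Called from the softirq itself. */
    void counter_tick(void)
    {
            spin_lock(&counter_lock);       /* already in a softirq: plain lock is enough */
            shared_counter++;
            spin_unlock(&counter_lock);
    }
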
183 Locking Between User Context and Tasklets
184 -----------------------------------------
This is exactly the same as above, because tasklets are actually run
from a softirq.

189 Locking Between User Context and Timers
190 ---------------------------------------
192 This, too, is exactly the same as above, because timers are actually run
from a softirq. From a locking point of view, tasklets and timers are
identical.

196 Locking Between Tasklets/Timers
197 -------------------------------
Sometimes a tasklet or timer might want to share data with another
tasklet or timer.

202 The Same Tasklet/Timer
203 ~~~~~~~~~~~~~~~~~~~~~~
205 Since a tasklet is never run on two CPUs at once, you don't need to
worry about your tasklet being reentrant (running twice at once), even
on SMP.

209 Different Tasklets/Timers
210 ~~~~~~~~~~~~~~~~~~~~~~~~~
If another tasklet/timer wants to share data with your tasklet or timer,
you will both need to use spin_lock() and
spin_unlock() calls. spin_lock_bh() is
unnecessary here, as you are already in a tasklet, and none will be run
on the same CPU.

218 Locking Between Softirqs
219 ------------------------
Often a softirq might want to share data with itself or a tasklet/timer.

The Same Softirq
~~~~~~~~~~~~~~~~

The same softirq can run on the other CPUs: you can use a per-CPU array
227 (see `Per-CPU Data`_) for better performance. If you're
228 going so far as to use a softirq, you probably care about scalable
229 performance enough to justify the extra complexity.
You'll need to use spin_lock() and
spin_unlock() for shared data.

Different Softirqs
~~~~~~~~~~~~~~~~~~

You'll need to use spin_lock() and
238 spin_unlock() for shared data, whether it be a timer,
239 tasklet, different softirq or the same or another softirq: any of them
could be running on a different CPU.

Hard IRQ Context
================

Hardware interrupts usually communicate with a tasklet or softirq.
Frequently this involves putting work in a queue, which the softirq will
take out.

249 Locking Between Hard IRQ and Softirqs/Tasklets
250 ----------------------------------------------
252 If a hardware irq handler shares data with a softirq, you have two
253 concerns. Firstly, the softirq processing can be interrupted by a
254 hardware interrupt, and secondly, the critical region could be entered
255 by a hardware interrupt on another CPU. This is where
256 spin_lock_irq() is used. It is defined to disable
257 interrupts on that cpu, then grab the lock.
258 spin_unlock_irq() does the reverse.
260 The irq handler does not need to use spin_lock_irq(), because
261 the softirq cannot run while the irq handler is running: it can use
262 spin_lock(), which is slightly faster. The only exception
263 would be if a different hardware irq handler uses the same lock:
264 spin_lock_irq() will stop that from interrupting us.
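
A minimal sketch of that division of labour (the names are invented for
illustration)::

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(stat_lock);      /* hypothetical lock */
    static unsigned long packets_seen;      /* shared by hard irq and softirq */

    /* Hard irq handler: the softirq cannot interrupt us, spin_lock() is enough. */
    static irqreturn_t my_irq_handler(int irq, void *dev_id)
    {
            spin_lock(&stat_lock);
            packets_seen++;
            spin_unlock(&stat_lock);
            return IRQ_HANDLED;
    }

    /* Softirq side: must keep the hard irq handler out with spin_lock_irq(). */
    static void my_softirq_work(void)
    {
            spin_lock_irq(&stat_lock);
            packets_seen = 0;               /* e.g. a periodic reset of the statistic */
            spin_unlock_irq(&stat_lock);
    }
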
266 This works perfectly for UP as well: the spin lock vanishes, and this
267 macro simply becomes local_irq_disable()
(``include/asm/smp.h``), which protects you from the softirq/tasklet/BH
being run.

271 spin_lock_irqsave() (``include/linux/spinlock.h``) is a
272 variant which saves whether interrupts were on or off in a flags word,
273 which is passed to spin_unlock_irqrestore(). This means
that the same code can be used inside a hard irq handler (where
interrupts are already off) and in softirqs (where the irq disabling is
required).

278 Note that softirqs (and hence tasklets and timers) are run on return
279 from hardware interrupts, so spin_lock_irq() also stops
280 these. In that sense, spin_lock_irqsave() is the most
281 general and powerful locking function.
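
As a minimal sketch (``dev_lock`` and ``struct my_dev`` are invented for
illustration), the flags variant looks like this::

    #include <linux/spinlock.h>

    struct my_dev {                 /* hypothetical device state */
            int value;
    };

    static DEFINE_SPINLOCK(dev_lock);

    /* Callable from hard irq handlers and from softirq/user context alike. */
    void my_dev_set(struct my_dev *dev, int val)
    {
            unsigned long flags;

            spin_lock_irqsave(&dev_lock, flags);       /* save irq state, disable, lock */
            dev->value = val;
            spin_unlock_irqrestore(&dev_lock, flags);  /* unlock, restore saved state */
    }
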
283 Locking Between Two Hard IRQ Handlers
284 -------------------------------------
286 It is rare to have to share data between two IRQ handlers, but if you
287 do, spin_lock_irqsave() should be used: it is
architecture-specific whether all interrupts are disabled inside irq
handlers themselves.

291 Cheat Sheet For Locking
292 =======================
294 Pete Zaitcev gives the following summary:
296 - If you are in a process context (any syscall) and want to lock other
processes out, use a mutex. You can take a mutex and sleep
298 (``copy_from_user()`` or ``kmalloc(x,GFP_KERNEL)``).
300 - Otherwise (== data can be touched in an interrupt), use
301 spin_lock_irqsave() and
302 spin_unlock_irqrestore().
304 - Avoid holding spinlock for more than 5 lines of code and across any
305 function call (except accessors like readb()).
307 Table of Minimum Requirements
308 -----------------------------
310 The following table lists the **minimum** locking requirements between
311 various contexts. In some cases, the same context can only be running on
312 one CPU at a time, so no locking is required for that context (eg. a
particular thread can only run on one CPU at a time, but if it needs to
share data with another thread, locking is required).

316 Remember the advice above: you can always use
spin_lock_irqsave(), which is a superset of all other
spinlock primitives.

============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============
.              IRQ Handler A IRQ Handler B Softirq A Softirq B Tasklet A Tasklet B Timer A Timer B User Context A User Context B
============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============
IRQ Handler A  None
IRQ Handler B  SLIS          None
Softirq A      SLI           SLI           SL
Softirq B      SLI           SLI           SL        SL
Tasklet A      SLI           SLI           SL        SL        None
Tasklet B      SLI           SLI           SL        SL        SL        None
Timer A        SLI           SLI           SL        SL        SL        SL        None
Timer B        SLI           SLI           SL        SL        SL        SL        SL      None
User Context A SLI           SLI           SLBH      SLBH      SLBH      SLBH      SLBH    SLBH    None
User Context B SLI           SLI           SLBH      SLBH      SLBH      SLBH      SLBH    SLBH    MLI            None
============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============

Table: Table of Locking Requirements

+--------+----------------------------+
| SLIS   | spin_lock_irqsave          |
+--------+----------------------------+
| SLI    | spin_lock_irq              |
+--------+----------------------------+
| SL     | spin_lock                  |
+--------+----------------------------+
| SLBH   | spin_lock_bh               |
+--------+----------------------------+
| MLI    | mutex_lock_interruptible   |
+--------+----------------------------+

349 Table: Legend for Locking Requirements Table
351 The trylock Functions
352 =====================
354 There are functions that try to acquire a lock only once and immediately
return a value indicating success or failure to acquire the lock.
356 They can be used if you need no access to the data protected with the
357 lock when some other thread is holding the lock. You should acquire the
358 lock later if you then need access to the data protected with the lock.
360 spin_trylock() does not spin but returns non-zero if it
361 acquires the spinlock on the first try or 0 if not. This function can be
362 used in all contexts like spin_lock(): you must have
disabled the contexts that might interrupt you and acquire the spin
lock.

366 mutex_trylock() does not suspend your task but returns
367 non-zero if it could lock the mutex on the first try or 0 if not. This
368 function cannot be safely used in hardware or software interrupt
369 contexts despite not sleeping.
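
As a minimal sketch of an opportunistic caller (``cleanup_lock`` and
``maybe_cleanup()`` are invented for illustration)::

    #include <linux/mutex.h>

    static DEFINE_MUTEX(cleanup_lock);   /* hypothetical lock */
    static int cleanup_count;            /* hypothetical state it protects */

    /* Run the cleanup only if nobody else is already running it. */
    void maybe_cleanup(void)
    {
            if (!mutex_trylock(&cleanup_lock))
                    return;               /* lock busy: skip, never sleep */
            cleanup_count++;              /* stands in for the real work */
            mutex_unlock(&cleanup_lock);
    }
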
Common Examples
===============

Let's step through a simple example: a cache of number to name mappings.
The cache keeps a count of how often each of the objects is used, and
when it gets full, throws out the least used one.

All In User Context
-------------------

For our first example, we assume that all operations are in user context
(ie. from system calls), so we can sleep. This means we can use a mutex
to protect the cache and all the objects within it. Here's the code::

    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/mutex.h>
    #include <asm/errno.h>

    struct object
    {
            struct list_head list;
            int id;
            char name[32];
            int popularity;
    };

    /* Protects the cache, cache_num, and the objects within it */
    static DEFINE_MUTEX(cache_lock);
    static LIST_HEAD(cache);
    static unsigned int cache_num = 0;
    #define MAX_CACHE_SIZE 10

    /* Must be holding cache_lock */
    static struct object *__cache_find(int id)
    {
            struct object *i;

            list_for_each_entry(i, &cache, list)
                    if (i->id == id) {
                            i->popularity++;
                            return i;
                    }
            return NULL;
    }

    /* Must be holding cache_lock */
    static void __cache_delete(struct object *obj)
    {
            BUG_ON(!obj);
            list_del(&obj->list);
            kfree(obj);
            cache_num--;
    }

    /* Must be holding cache_lock */
    static void __cache_add(struct object *obj)
    {
            list_add(&obj->list, &cache);
            if (++cache_num > MAX_CACHE_SIZE) {
                    struct object *i, *outcast = NULL;
                    list_for_each_entry(i, &cache, list) {
                            if (!outcast || i->popularity < outcast->popularity)
                                    outcast = i;
                    }
                    __cache_delete(outcast);
            }
    }

    int cache_add(int id, const char *name)
    {
            struct object *obj;

            if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
                    return -ENOMEM;

            strscpy(obj->name, name, sizeof(obj->name));
            obj->id = id;
            obj->popularity = 0;

            mutex_lock(&cache_lock);
            __cache_add(obj);
            mutex_unlock(&cache_lock);
            return 0;
    }

    void cache_delete(int id)
    {
            mutex_lock(&cache_lock);
            __cache_delete(__cache_find(id));
            mutex_unlock(&cache_lock);
    }

    int cache_find(int id, char *name)
    {
            struct object *obj;
            int ret = -ENOENT;

            mutex_lock(&cache_lock);
            obj = __cache_find(id);
            if (obj) {
                    ret = 0;
                    strcpy(name, obj->name);
            }
            mutex_unlock(&cache_lock);
            return ret;
    }

480 Note that we always make sure we have the cache_lock when we add,
481 delete, or look up the cache: both the cache infrastructure itself and
482 the contents of the objects are protected by the lock. In this case it's
easy, since we copy the data for the user, and never let them access the
objects directly.

486 There is a slight (and common) optimization here: in
487 cache_add() we set up the fields of the object before
grabbing the lock. This is safe, as no-one else can access it until we
put it in cache.

491 Accessing From Interrupt Context
492 --------------------------------
494 Now consider the case where cache_find() can be called
495 from interrupt context: either a hardware interrupt or a softirq. An
example would be a timer which deletes objects from the cache.
498 The change is shown below, in standard patch format: the ``-`` are lines
which are taken away, and the ``+`` are lines which are added.

::

503 --- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100
504 +++ cache.c.interrupt 2003-12-09 14:07:49.000000000 +1100
509 -static DEFINE_MUTEX(cache_lock);
510 +static DEFINE_SPINLOCK(cache_lock);
511 static LIST_HEAD(cache);
512 static unsigned int cache_num = 0;
513 #define MAX_CACHE_SIZE 10
515 int cache_add(int id, const char *name)
518 + unsigned long flags;
520 if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
526 - mutex_lock(&cache_lock);
527 + spin_lock_irqsave(&cache_lock, flags);
529 - mutex_unlock(&cache_lock);
530 + spin_unlock_irqrestore(&cache_lock, flags);
534 void cache_delete(int id)
536 - mutex_lock(&cache_lock);
537 + unsigned long flags;
539 + spin_lock_irqsave(&cache_lock, flags);
540 __cache_delete(__cache_find(id));
541 - mutex_unlock(&cache_lock);
542 + spin_unlock_irqrestore(&cache_lock, flags);
545 int cache_find(int id, char *name)
549 + unsigned long flags;
551 - mutex_lock(&cache_lock);
552 + spin_lock_irqsave(&cache_lock, flags);
553 obj = __cache_find(id);
556 strcpy(name, obj->name);
558 - mutex_unlock(&cache_lock);
559 + spin_unlock_irqrestore(&cache_lock, flags);
563 Note that the spin_lock_irqsave() will turn off
564 interrupts if they are on, otherwise does nothing (if we are already in
an interrupt handler), hence these functions are safe to call from any
context.

568 Unfortunately, cache_add() calls kmalloc()
569 with the ``GFP_KERNEL`` flag, which is only legal in user context. I
570 have assumed that cache_add() is still only called in
user context, otherwise this should become a parameter to
cache_add().

574 Exposing Objects Outside This File
575 ----------------------------------
577 If our objects contained more information, it might not be sufficient to
578 copy the information in and out: other parts of the code might want to
579 keep pointers to these objects, for example, rather than looking up the
580 id every time. This produces two problems.
582 The first problem is that we use the ``cache_lock`` to protect objects:
583 we'd need to make this non-static so the rest of the code can use it.
584 This makes locking trickier, as it is no longer all in one place.
586 The second problem is the lifetime problem: if another structure keeps a
587 pointer to an object, it presumably expects that pointer to remain
588 valid. Unfortunately, this is only guaranteed while you hold the lock,
589 otherwise someone might call cache_delete() and even
590 worse, add another object, re-using the same address.
As there is only one lock, you can't hold it forever: no-one else would
get any work done.

595 The solution to this problem is to use a reference count: everyone who
596 has a pointer to the object increases it when they first get the object,
597 and drops the reference count when they're finished with it. Whoever
drops it to zero knows it is unused, and can actually delete it.

::

602 --- cache.c.interrupt 2003-12-09 14:25:43.000000000 +1100
603 +++ cache.c.refcnt 2003-12-09 14:33:05.000000000 +1100
607 struct list_head list;
608 + unsigned int refcnt;
613 static unsigned int cache_num = 0;
614 #define MAX_CACHE_SIZE 10
616 +static void __object_put(struct object *obj)
618 + if (--obj->refcnt == 0)
622 +static void __object_get(struct object *obj)
627 +void object_put(struct object *obj)
629 + unsigned long flags;
631 + spin_lock_irqsave(&cache_lock, flags);
633 + spin_unlock_irqrestore(&cache_lock, flags);
636 +void object_get(struct object *obj)
638 + unsigned long flags;
640 + spin_lock_irqsave(&cache_lock, flags);
642 + spin_unlock_irqrestore(&cache_lock, flags);
645 /* Must be holding cache_lock */
646 static struct object *__cache_find(int id)
651 list_del(&obj->list);
657 strscpy(obj->name, name, sizeof(obj->name));
660 + obj->refcnt = 1; /* The cache holds a reference */
662 spin_lock_irqsave(&cache_lock, flags);
665 spin_unlock_irqrestore(&cache_lock, flags);
668 -int cache_find(int id, char *name)
669 +struct object *cache_find(int id)
675 spin_lock_irqsave(&cache_lock, flags);
676 obj = __cache_find(id);
679 - strcpy(name, obj->name);
683 spin_unlock_irqrestore(&cache_lock, flags);
688 We encapsulate the reference counting in the standard 'get' and 'put'
689 functions. Now we can return the object itself from
690 cache_find() which has the advantage that the user can
now sleep holding the object (eg. to copy_to_user() to copy the name to
userspace).

694 The other point to note is that I said a reference should be held for
695 every pointer to the object: thus the reference count is 1 when first
696 inserted into the cache. In some versions the framework does not hold a
697 reference count, but they are more complicated.
699 Using Atomic Operations For The Reference Count
700 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
702 In practice, :c:type:`atomic_t` would usually be used for refcnt. There are a
703 number of atomic operations defined in ``include/asm/atomic.h``: these
704 are guaranteed to be seen atomically from all CPUs in the system, so no
705 lock is required. In this case, it is simpler than using spinlocks,
706 although for anything non-trivial using spinlocks is clearer. The
707 atomic_inc() and atomic_dec_and_test()
708 are used instead of the standard increment and decrement operators, and
the lock is no longer used to protect the reference count itself.

::

713 --- cache.c.refcnt 2003-12-09 15:00:35.000000000 +1100
714 +++ cache.c.refcnt-atomic 2003-12-11 15:49:42.000000000 +1100
718 struct list_head list;
719 - unsigned int refcnt;
725 static unsigned int cache_num = 0;
726 #define MAX_CACHE_SIZE 10
728 -static void __object_put(struct object *obj)
730 - if (--obj->refcnt == 0)
734 -static void __object_get(struct object *obj)
739 void object_put(struct object *obj)
741 - unsigned long flags;
743 - spin_lock_irqsave(&cache_lock, flags);
745 - spin_unlock_irqrestore(&cache_lock, flags);
746 + if (atomic_dec_and_test(&obj->refcnt))
750 void object_get(struct object *obj)
752 - unsigned long flags;
754 - spin_lock_irqsave(&cache_lock, flags);
756 - spin_unlock_irqrestore(&cache_lock, flags);
757 + atomic_inc(&obj->refcnt);
760 /* Must be holding cache_lock */
764 list_del(&obj->list);
771 strscpy(obj->name, name, sizeof(obj->name));
774 - obj->refcnt = 1; /* The cache holds a reference */
775 + atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
777 spin_lock_irqsave(&cache_lock, flags);
780 spin_lock_irqsave(&cache_lock, flags);
781 obj = __cache_find(id);
785 spin_unlock_irqrestore(&cache_lock, flags);
789 Protecting The Objects Themselves
790 ---------------------------------
792 In these examples, we assumed that the objects (except the reference
793 counts) never changed once they are created. If we wanted to allow the
794 name to change, there are three possibilities:
796 - You can make ``cache_lock`` non-static, and tell people to grab that
797 lock before changing the name in any object.
799 - You can provide a cache_obj_rename() which grabs this
lock and changes the name for the caller, and tell everyone to use
that function.

803 - You can make the ``cache_lock`` protect only the cache itself, and
804 use another lock to protect the name.
806 Theoretically, you can make the locks as fine-grained as one lock for
every field, for every object. In practice, the most common variants
are:

810 - One lock which protects the infrastructure (the ``cache`` list in
811 this example) and all the objects. This is what we have done so far.
813 - One lock which protects the infrastructure (including the list
814 pointers inside the objects), and one lock inside the object which
815 protects the rest of that object.
817 - Multiple locks to protect the infrastructure (eg. one lock per hash
818 chain), possibly with a separate per-object lock.
Here is the "lock-per-object" implementation::

824 --- cache.c.refcnt-atomic 2003-12-11 15:50:54.000000000 +1100
825 +++ cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100
830 + /* These two protected by cache_lock. */
831 struct list_head list;
836 + /* Doesn't change once created. */
839 + spinlock_t lock; /* Protects the name */
844 static DEFINE_SPINLOCK(cache_lock);
848 atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
849 + spin_lock_init(&obj->lock);
851 spin_lock_irqsave(&cache_lock, flags);
Note that I decided that the popularity count should be protected by the
855 ``cache_lock`` rather than the per-object lock: this is because it (like
856 the :c:type:`struct list_head <list_head>` inside the object)
857 is logically part of the infrastructure. This way, I don't need to grab
the lock of every object in __cache_add() when seeking
the least popular.

861 I also decided that the id member is unchangeable, so I don't need to
862 grab each object lock in __cache_find() to examine the
id: the object lock is only used by a caller who wants to read or write
the name field.

866 Note also that I added a comment describing what data was protected by
867 which locks. This is extremely important, as it describes the runtime
868 behavior of the code, and can be hard to gain from just reading. And as
Alan Cox says, “Lock data, not code”.

Common Problems
===============

Deadlock: Simple and Advanced
-----------------------------

877 There is a coding bug where a piece of code tries to grab a spinlock
878 twice: it will spin forever, waiting for the lock to be released
879 (spinlocks, rwlocks and mutexes are not recursive in Linux). This is
880 trivial to diagnose: not a
881 stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.
883 For a slightly more complex case, imagine you have a region shared by a
884 softirq and user context. If you use a spin_lock() call
885 to protect it, it is possible that the user context will be interrupted
886 by the softirq while it holds the lock, and the softirq will then spin
887 forever trying to get the same lock.
889 Both of these are called deadlock, and as shown above, it can occur even
890 with a single CPU (although not on UP compiles, since spinlocks vanish
891 on kernel compiles with ``CONFIG_SMP``\ =n. You'll still get data
892 corruption in the second example).
894 This complete lockup is easy to diagnose: on SMP boxes the watchdog
895 timer or compiling with ``DEBUG_SPINLOCK`` set
(``include/linux/spinlock.h``) will show this up immediately when it
happens.

899 A more complex problem is the so-called 'deadly embrace', involving two
900 or more locks. Say you have a hash table: each entry in the table is a
901 spinlock, and a chain of hashed objects. Inside a softirq handler, you
902 sometimes want to alter an object from one place in the hash to another:
903 you grab the spinlock of the old hash chain and the spinlock of the new
hash chain, and delete the object from the old one, and insert it in the
new one.

907 There are two problems here. First, if your code ever tries to move the
908 object to the same chain, it will deadlock with itself as it tries to
909 lock it twice. Secondly, if the same softirq on another CPU is trying to
move another object in the reverse direction, the following could
happen:

+-----------------------+-----------------------+
| CPU 1                 | CPU 2                 |
+=======================+=======================+
| Grab lock A -> OK     | Grab lock B -> OK     |
+-----------------------+-----------------------+
| Grab lock B -> spin   | Grab lock A -> spin   |
+-----------------------+-----------------------+

923 The two CPUs will spin forever, waiting for the other to give up their
lock. It will look, smell, and feel like a crash.

Preventing Deadlock
-------------------

Textbooks will tell you that if you always lock in the same order, you
will never get this kind of deadlock. Practice will tell you that this
approach doesn't scale: when I create a new lock, I don't understand
enough of the kernel to figure out where in the 5000 lock hierarchy it
will fit.

935 The best locks are encapsulated: they never get exposed in headers, and
936 are never held around calls to non-trivial functions outside the same
937 file. You can read through this code and see that it will never
938 deadlock, because it never tries to grab another lock while it has that
one. People using your code don't even need to know you are using a
lock.

942 A classic problem here is when you provide callbacks or hooks: if you
943 call these with the lock held, you risk simple deadlock, or a deadly
944 embrace (who knows what the callback will do?).
946 Overzealous Prevention Of Deadlocks
947 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
949 Deadlocks are problematic, but not as bad as data corruption. Code which
950 grabs a read lock, searches a list, fails to find what it wants, drops
the read lock, grabs a write lock and inserts the object has a race
condition.

954 Racing Timers: A Kernel Pastime
955 -------------------------------
957 Timers can produce their own special problems with races. Consider a
958 collection of objects (list, hash, etc) where each object has a timer
959 which is due to destroy it.
961 If you want to destroy the entire collection (say on module removal),
962 you might do the following::

        /* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE
           HUNGARIAN NOTATION */
        spin_lock_bh(&list_lock);

        while (list) {
                struct foo *next = list->next;
                timer_delete(&list->timer);
                kfree(list);
                list = next;
        }

        spin_unlock_bh(&list_lock);

978 Sooner or later, this will crash on SMP, because a timer can have just
979 gone off before the spin_lock_bh(), and it will only get
980 the lock after we spin_unlock_bh(), and then try to free
981 the element (which has already been freed!).
983 This can be avoided by checking the result of
984 timer_delete(): if it returns 1, the timer has been deleted.
If 0, it means (in this case) that it is currently running, so we can
do the following::

        retry:
                spin_lock_bh(&list_lock);

                while (list) {
                        struct foo *next = list->next;
                        if (!timer_delete(&list->timer)) {
                                /* Give timer a chance to delete this */
                                spin_unlock_bh(&list_lock);
                                goto retry;
                        }
                        list = next;
                        kfree(list);
                }

                spin_unlock_bh(&list_lock);

1005 Another common problem is deleting timers which restart themselves (by
1006 calling add_timer() at the end of their timer function).
1007 Because this is a fairly common case which is prone to races, you should
1008 use timer_delete_sync() (``include/linux/timer.h``) to handle this case.
1010 Before freeing a timer, timer_shutdown() or timer_shutdown_sync() should be
1011 called which will keep it from being rearmed. Any subsequent attempt to
1012 rearm the timer will be silently ignored by the core code.
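
As a sketch of a teardown path using this (``struct foo`` here is a
made-up structure containing its own ``struct timer_list``)::

    #include <linux/slab.h>
    #include <linux/timer.h>

    struct foo {
            struct timer_list timer;    /* may have been re-arming itself */
            int data;
    };

    void foo_destroy(struct foo *f)
    {
            /* Wait for a running handler and prevent any future re-arming. */
            timer_shutdown_sync(&f->timer);
            kfree(f);                   /* now it cannot fire again: safe to free */
    }
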
Locking Speed
=============

There are three main things to worry about when considering the speed of
some code which does locking. First is concurrency: how many things are
1020 going to be waiting while someone else is holding a lock. Second is the
1021 time taken to actually acquire and release an uncontended lock. Third is
1022 using fewer, or smarter locks. I'm assuming that the lock is used fairly
1023 often: otherwise, you wouldn't be concerned about efficiency.
1025 Concurrency depends on how long the lock is usually held: you should
1026 hold the lock for as long as needed, but no longer. In the cache
1027 example, we always create the object without the lock held, and then
1028 grab the lock only when we are ready to insert it in the list.
1030 Acquisition times depend on how much damage the lock operations do to
1031 the pipeline (pipeline stalls) and how likely it is that this CPU was
1032 the last one to grab the lock (ie. is the lock cache-hot for this CPU):
1033 on a machine with more CPUs, this likelihood drops fast. Consider a
1034 700MHz Intel Pentium III: an instruction takes about 0.7ns, an atomic
1035 increment takes about 58ns, a lock which is cache-hot on this CPU takes
1036 160ns, and a cacheline transfer from another CPU takes an additional 170
1037 to 360ns. (These figures from Paul McKenney's `Linux Journal RCU
1038 article <http://www.linuxjournal.com/article.php?sid=6993>`__).
1040 These two aims conflict: holding a lock for a short time might be done
1041 by splitting locks into parts (such as in our final per-object-lock
1042 example), but this increases the number of lock acquisitions, and the
1043 results are often slower than having a single lock. This is another
1044 reason to advocate locking simplicity.
1046 The third concern is addressed below: there are some methods to reduce
1047 the amount of locking which needs to be done.
1049 Read/Write Lock Variants
1050 ------------------------
1052 Both spinlocks and mutexes have read/write variants: ``rwlock_t`` and
1053 :c:type:`struct rw_semaphore <rw_semaphore>`. These divide
1054 users into two classes: the readers and the writers. If you are only
1055 reading the data, you can get a read lock, but to write to the data you
need the write lock. Many people can hold a read lock, but a writer must
be sole holder.

1059 If your code divides neatly along reader/writer lines (as our cache code
1060 does), and the lock is held by readers for significant lengths of time,
1061 using these locks can help. They are slightly slower than the normal
1062 locks though, so in practice ``rwlock_t`` is not usually worthwhile.
1064 Avoiding Locks: Read Copy Update
1065 --------------------------------
1067 There is a special method of read/write locking called Read Copy Update.
1068 Using RCU, the readers can avoid taking a lock altogether: as we expect
1069 our cache to be read more often than updated (otherwise the cache is a
1070 waste of time), it is a candidate for this optimization.
1072 How do we get rid of read locks? Getting rid of read locks means that
1073 writers may be changing the list underneath the readers. That is
1074 actually quite simple: we can read a linked list while an element is
1075 being added if the writer adds the element very carefully. For example,
1076 adding ``new`` to a single linked list called ``list``::

        new->next = list->next;
        wmb();
        list->next = new;

1083 The wmb() is a write memory barrier. It ensures that the
1084 first operation (setting the new element's ``next`` pointer) is complete
1085 and will be seen by all CPUs, before the second operation is (putting
1086 the new element into the list). This is important, since modern
1087 compilers and modern CPUs can both reorder instructions unless told
1088 otherwise: we want a reader to either not see the new element at all, or
see the new element with the ``next`` pointer correctly pointing at the
rest of the list.

1092 Fortunately, there is a function to do this for standard
1093 :c:type:`struct list_head <list_head>` lists:
1094 list_add_rcu() (``include/linux/list.h``).
1096 Removing an element from the list is even simpler: we replace the
1097 pointer to the old element with a pointer to its successor, and readers
will either see it, or skip over it.

::

        list->next = old->next;

1105 There is list_del_rcu() (``include/linux/list.h``) which
does this (the normal version poisons the old object, which we don't
want).

1109 The reader must also be careful: some CPUs can look through the ``next``
1110 pointer to start reading the contents of the next element early, but
1111 don't realize that the pre-fetched contents is wrong when the ``next``
1112 pointer changes underneath them. Once again, there is a
1113 list_for_each_entry_rcu() (``include/linux/list.h``)
1114 to help you. Of course, writers can just use
1115 list_for_each_entry(), since there cannot be two
1116 simultaneous writers.
1118 Our final dilemma is this: when can we actually destroy the removed
1119 element? Remember, a reader might be stepping through this element in
1120 the list right now: if we free this element and the ``next`` pointer
1121 changes, the reader will jump off into garbage and crash. We need to
1122 wait until we know that all the readers who were traversing the list
1123 when we deleted the element are finished. We use
1124 call_rcu() to register a callback which will actually
1125 destroy the object once all pre-existing readers are finished.
1126 Alternatively, synchronize_rcu() may be used to block
until all pre-existing readers are finished.

1129 But how does Read Copy Update know when the readers are finished? The
1130 method is this: firstly, the readers always traverse the list inside
1131 rcu_read_lock()/rcu_read_unlock() pairs:
these simply disable preemption so the reader won't go to sleep while
reading the list.

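
As a sketch of what a reader looks like (based on the cache example;
``cache_id_exists()`` is a made-up helper)::

    /* A reader: no lock taken, only preemption disabled. */
    int cache_id_exists(int id)
    {
            struct object *i;
            int found = 0;

            rcu_read_lock();
            list_for_each_entry_rcu(i, &cache, list) {
                    if (i->id == id) {
                            found = 1;
                            break;
                    }
            }
            rcu_read_unlock();
            return found;
    }
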
1135 RCU then waits until every other CPU has slept at least once: since
1136 readers cannot sleep, we know that any readers which were traversing the
1137 list during the deletion are finished, and the callback is triggered.
1138 The real Read Copy Update code is a little more optimized than this, but
this is the fundamental idea.

::

1143 --- cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100
1144 +++ cache.c.rcupdate 2003-12-11 17:55:14.000000000 +1100
1146 #include <linux/list.h>
1147 #include <linux/slab.h>
1148 #include <linux/string.h>
1149 +#include <linux/rcupdate.h>
1150 #include <linux/mutex.h>
1151 #include <asm/errno.h>
1155 - /* These two protected by cache_lock. */
1156 + /* This is protected by RCU */
1157 struct list_head list;
1160 + struct rcu_head rcu;
1164 /* Doesn't change once created. */
1169 - list_for_each_entry(i, &cache, list) {
1170 + list_for_each_entry_rcu(i, &cache, list) {
1178 +/* Final discard done once we know no readers are looking. */
1179 +static void cache_delete_rcu(void *arg)
1184 /* Must be holding cache_lock */
1185 static void __cache_delete(struct object *obj)
1188 - list_del(&obj->list);
1190 + list_del_rcu(&obj->list);
1192 + call_rcu(&obj->rcu, cache_delete_rcu);
1195 /* Must be holding cache_lock */
1196 static void __cache_add(struct object *obj)
1198 - list_add(&obj->list, &cache);
1199 + list_add_rcu(&obj->list, &cache);
1200 if (++cache_num > MAX_CACHE_SIZE) {
1201 struct object *i, *outcast = NULL;
1202 list_for_each_entry(i, &cache, list) {
1203 @@ -104,12 +114,11 @@
1204 struct object *cache_find(int id)
1207 - unsigned long flags;
1209 - spin_lock_irqsave(&cache_lock, flags);
1211 obj = __cache_find(id);
1214 - spin_unlock_irqrestore(&cache_lock, flags);
1215 + rcu_read_unlock();
1219 Note that the reader will alter the popularity member in
1220 __cache_find(), and now it doesn't hold a lock. One
1221 solution would be to make it an ``atomic_t``, but for this usage, we
don't really care about races: an approximate result is good enough, so
I didn't change it.

1225 The result is that cache_find() requires no
synchronization with any other functions, so is almost as fast on SMP as
it would be on UP.

1229 There is a further optimization possible here: remember our original
1230 cache code, where there were no reference counts and the caller simply
1231 held the lock whenever using the object? This is still possible: if you
1232 hold the lock, no one can delete the object, so you don't need to get
1233 and put the reference count.
1235 Now, because the 'read lock' in RCU is simply disabling preemption, a
1236 caller which always has preemption disabled between calling
1237 cache_find() and object_put() does not
1238 need to actually get and put the reference count: we could expose
1239 __cache_find() by making it non-static, and such
1240 callers could simply call that.
1242 The benefit here is that the reference count is not written to: the
object is not altered in any way, which is much faster on SMP machines
due to caching.

Per-CPU Data
------------

1249 Another technique for avoiding locking which is used fairly widely is to
1250 duplicate information for each CPU. For example, if you wanted to keep a
1251 count of a common condition, you could use a spin lock and a single
1252 counter. Nice and simple.
1254 If that was too slow (it's usually not, but if you've got a really big
1255 machine to test on and can show that it is), you could instead use a
1256 counter for each CPU, then none of them need an exclusive lock. See
1257 DEFINE_PER_CPU(), get_cpu_var() and
1258 put_cpu_var() (``include/linux/percpu.h``).
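
A minimal sketch of a per-CPU counter (``hit_count`` is an invented name)::

    #include <linux/percpu.h>
    #include <linux/cpumask.h>

    static DEFINE_PER_CPU(unsigned long, hit_count);   /* hypothetical counter */

    /* Fast path: bump this CPU's copy; no shared cacheline, no lock. */
    void count_hit(void)
    {
            get_cpu_var(hit_count)++;   /* disables preemption, gives this CPU's copy */
            put_cpu_var(hit_count);     /* re-enables preemption */
    }

    /* Slow path: an approximate total, summed without locking. */
    unsigned long count_hits_total(void)
    {
            unsigned long sum = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    sum += per_cpu(hit_count, cpu);
            return sum;
    }
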
1260 Of particular use for simple per-cpu counters is the ``local_t`` type,
1261 and the cpu_local_inc() and related functions, which are
1262 more efficient than simple code on some architectures
1263 (``include/asm/local.h``).
1265 Note that there is no simple, reliable way of getting an exact value of
such a counter, without introducing more locks. This is not a problem
for some uses.

Data Which Is Mostly Used By An IRQ Handler
-------------------------------------------
1272 If data is always accessed from within the same IRQ handler, you don't
1273 need a lock at all: the kernel already guarantees that the irq handler
1274 will not run simultaneously on multiple CPUs.
1276 Manfred Spraul points out that you can still do this, even if the data
1277 is very occasionally accessed in user context or softirqs/tasklets. The
1278 irq handler doesn't use a lock, and all other accesses are done as so::

        mutex_lock(&lock);
        disable_irq(irq);
        ...
        enable_irq(irq);
        mutex_unlock(&lock);

1286 The disable_irq() prevents the irq handler from running
1287 (and waits for it to finish if it's currently running on other CPUs).
The mutex prevents any other accesses happening at the same time.
1289 Naturally, this is slower than just a spin_lock_irq()
call, so it only makes sense if this type of access happens extremely
rarely.

1293 What Functions Are Safe To Call From Interrupts?
1294 ================================================
1296 Many functions in the kernel sleep (ie. call schedule()) directly or
1297 indirectly: you can never call them while holding a spinlock, or with
1298 preemption disabled. This also means you need to be in user context:
1299 calling them from an interrupt is illegal.
1301 Some Functions Which Sleep
1302 --------------------------
1304 The most common ones are listed below, but you usually have to read the
1305 code to find out if other calls are safe. If everyone else who calls it
1306 can sleep, you probably need to be able to sleep, too. In particular,
1307 registration and deregistration functions usually expect to be called
1308 from user context, and can sleep.
- Accesses to userspace:

  - copy_from_user()

  - copy_to_user()

  - get_user()

  - put_user()

- kmalloc(GFP_KERNEL)

- mutex_lock_interruptible() and mutex_lock()

1325 There is a mutex_trylock() which does not sleep.
1326 Still, it must not be used inside interrupt context since its
1327 implementation is not safe for that. mutex_unlock()
1328 will also never sleep. It cannot be used in interrupt context either
1329 since a mutex must be released by the same task that acquired it.
1331 Some Functions Which Don't Sleep
1332 --------------------------------
Some functions are safe to call from any context, or holding almost any
lock.

- printk()

- kfree()

- add_timer() and timer_delete()

Mutex API reference
===================

.. kernel-doc:: include/linux/mutex.h

1349 .. kernel-doc:: kernel/locking/mutex.c
Futex API reference
===================

.. kernel-doc:: kernel/futex/core.c

1358 .. kernel-doc:: kernel/futex/futex.h
1361 .. kernel-doc:: kernel/futex/pi.c
1364 .. kernel-doc:: kernel/futex/requeue.c
1367 .. kernel-doc:: kernel/futex/waitwake.c
Further reading
===============

- ``Documentation/locking/spinlocks.rst``: Linus Torvalds' spinlocking
  tutorial in the kernel sources.

1376 - Unix Systems for Modern Architectures: Symmetric Multiprocessing and
1377 Caching for Kernel Programmers:
1379 Curt Schimmel's very good introduction to kernel level locking (not
1380 written for Linux, but nearly everything applies). The book is
1381 expensive, but really worth every penny to understand SMP locking.
Thanks
======

Thanks to Telsa Gwynne for DocBooking, neatening and adding style.

1389 Thanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul Mackerras,
1390 Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim Waugh, Pete Zaitcev,
1391 James Morris, Robert Love, Paul McKenney, John Ashby for proofreading,
1392 correcting, flaming, commenting.
Thanks to the cabal for having no influence on this document.

Glossary
========

preemption
  Prior to 2.5, or when ``CONFIG_PREEMPT`` is unset, processes in user
  context inside the kernel would not preempt each other (ie. you had that
  CPU until you gave it up, except for interrupts). With the addition of
  ``CONFIG_PREEMPT`` in 2.5.4, this changed: when in user context, higher
  priority tasks can "cut in": spinlocks were changed to disable
  preemption, even on UP.

bh
  Bottom Half: for historical reasons, functions with '_bh' in them often
  now refer to any software interrupt, e.g. spin_lock_bh()
  blocks any software interrupt on the current CPU. Bottom halves are
  deprecated, and will eventually be replaced by tasklets. Only one bottom
  half will be running at any time.

Hardware Interrupt / Hardware IRQ
  Hardware interrupt request. in_hardirq() returns true in a
  hardware interrupt handler.

Interrupt Context
  Not user context: processing a hardware irq or software irq. Indicated
  by the in_interrupt() macro returning true.

SMP
  Symmetric Multi-Processor: kernels compiled for multiple-CPU machines
  (``CONFIG_SMP=y``).

Software Interrupt / softirq
  Software interrupt handler. in_hardirq() returns false;
  in_softirq() returns true. Tasklets and softirqs both
  fall into the category of 'software interrupts'.

  Strictly speaking a softirq is one of up to 32 enumerated software
  interrupts which can run on multiple CPUs at once. Sometimes used to
  refer to tasklets as well (ie. all software interrupts).

tasklet
  A dynamically-registrable software interrupt, which is guaranteed to
  only run on one CPU at a time.

timer
  A dynamically-registrable software interrupt, which is run at (or close
  to) a given time. When running, it is just like a tasklet (in fact, they
  are called from the ``TIMER_SOFTIRQ``).

UP
  Uni-Processor: Non-SMP. (``CONFIG_SMP=n``).

User Context
  The kernel executing on behalf of a particular process (ie. a system
  call or trap) or kernel thread. You can tell which process with the
  ``current`` macro. Not to be confused with userspace. Can be
  interrupted by software or hardware interrupts.

Userspace
  A process executing its own code outside the kernel.