		Semantics and Behavior of Atomic and
			 Bitmask Operations

			  David S. Miller

This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.

The atomic_t type should be defined as a signed integer.
Also, it should be made opaque such that any kind of cast to a normal
C integer type will fail.  Something like the following should
suffice:

	typedef struct { int counter; } atomic_t;

Historically, counter has been declared volatile.  This is now discouraged.
See Documentation/volatile-considered-harmful.txt for the complete rationale.

local_t is very similar to atomic_t.  If the counter is per CPU and only
updated by one CPU, local_t is probably more appropriate.  Please see
Documentation/local_ops.txt for the semantics of local_t.

The first operations to implement for atomic_t's are the initializers and
plain writes.

	#define ATOMIC_INIT(i)		{ (i) }
	#define atomic_set(v, i)	((v)->counter = (i))

The first macro is used in definitions, such as:

	static atomic_t my_counter = ATOMIC_INIT(1);

The initializer is atomic in that the return values of the atomic operations
are guaranteed to correctly reflect the initialized value if the
initializer is used before runtime.  If the initializer is used at runtime, a
proper implicit or explicit read memory barrier is needed before reading the
value with atomic_read from another thread.

The second interface can be used at runtime, as in:

	struct foo { atomic_t counter; };
	...

	struct foo *k;

	k = kmalloc(sizeof(*k), GFP_KERNEL);
	if (!k)
		return -ENOMEM;
	atomic_set(&k->counter, 0);

The setting is atomic in that the return values of the atomic operations by
all threads are guaranteed to correctly reflect either the value that has
been set with this operation or set with another operation.  A proper implicit
or explicit memory barrier is needed before the value set with the operation
is guaranteed to be readable with atomic_read from another thread.

Next, we have:

	#define atomic_read(v)	((v)->counter)

which simply reads the counter value currently visible to the calling thread.
The read is atomic in that the return value is guaranteed to be one of the
values initialized or modified with the interface operations if a proper
implicit or explicit memory barrier is used after possible runtime
initialization by any other thread and the value is modified only with the
interface operations.  atomic_read does not guarantee that the runtime
initialization by any other thread is visible yet, so the user of the
interface must take care of that with a proper implicit or explicit memory
barrier.

*** WARNING: atomic_read() and atomic_set() DO NOT IMPLY BARRIERS! ***

Some architectures may choose to use the volatile keyword, barriers, or inline
assembly to guarantee some degree of immediacy for atomic_read() and
atomic_set().  This is not uniformly guaranteed, and may change in the future,
so all users of atomic_t should treat atomic_read() and atomic_set() as simple
C statements that may be reordered or optimized away entirely by the compiler
or processor, and explicitly invoke the appropriate compiler and/or memory
barrier for each use case.  Failure to do so will result in code that may
suddenly break when used with different architectures or compiler
optimizations, or even changes in unrelated code that change how the
compiler optimizes the section accessing atomic_t variables.

*** YOU HAVE BEEN WARNED! ***

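As an illustration, here is a sketch (the obj fields are invented for
this example) of pairing atomic_set() and atomic_read() with explicit
barriers so that one CPU can safely publish data to another:

	/* CPU 0: publish the payload, then the flag. */
	obj->payload = val;
	smp_wmb();	/* order the payload store before the flag store */
	atomic_set(&obj->ready, 1);

	/* CPU 1: consume, pairing with the smp_wmb() above. */
	if (atomic_read(&obj->ready)) {
		smp_rmb();	/* order the flag load before the payload load */
		val = obj->payload;
	}
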
Properly aligned pointers, longs, ints, and chars (and unsigned
equivalents) may be atomically loaded from and stored to in the same
sense as described for atomic_read() and atomic_set().  The ACCESS_ONCE()
macro should be used to prevent the compiler from using optimizations
that might otherwise optimize accesses out of existence on the one hand,
or that might create unsolicited accesses on the other.

For example, consider the following code:

	while (a > 0)
		do_something();

If the compiler can prove that do_something() does not store to the
variable a, then the compiler is within its rights to transform this to
the following:

	tmp = a;
	if (tmp > 0)
		for (;;)
			do_something();

If you don't want the compiler to do this (and you probably don't), then
you should use something like the following:

	while (ACCESS_ONCE(a) > 0)
		do_something();

Alternatively, you could place a barrier() call in the loop.

For another example, consider the following code:

	tmp_a = a;
	do_something_with(tmp_a);
	do_something_else_with(tmp_a);

If the compiler can prove that do_something_with() does not store to the
variable a, then the compiler is within its rights to manufacture an
additional load as follows:

	tmp_a = a;
	do_something_with(tmp_a);
	tmp_a = a;
	do_something_else_with(tmp_a);

This could fatally confuse your code if it expected the same value
to be passed to do_something_with() and do_something_else_with().

The compiler would be likely to manufacture this additional load if
do_something_with() was an inline function that made very heavy use
of registers: reloading from variable a could save a flush to the
stack and later reload.  To prevent the compiler from attacking your
code in this manner, write the following:

	tmp_a = ACCESS_ONCE(a);
	do_something_with(tmp_a);
	do_something_else_with(tmp_a);

For a final example, consider the following code, assuming that the
variable a is set at boot time before the second CPU is brought online
and never changed later, so that memory barriers are not needed:

	if (a)
		b = 9;
	else
		b = 42;

The compiler is within its rights to manufacture an additional store
by transforming the above code into the following:

	b = 42;
	if (a)
		b = 9;

This could come as a fatal surprise to other code running concurrently
that expected b to never have the value 42 if a was zero.  To prevent
the compiler from doing this, write something like:

	if (a)
		ACCESS_ONCE(b) = 9;
	else
		ACCESS_ONCE(b) = 42;

Don't even -think- about doing this without proper use of memory barriers,
locks, or atomic operations if variable a can change at runtime!

*** WARNING: ACCESS_ONCE() DOES NOT IMPLY A BARRIER! ***

Now, we move on to the atomic operation interfaces typically implemented
with the help of assembly code.

	void atomic_add(int i, atomic_t *v);
	void atomic_sub(int i, atomic_t *v);
	void atomic_inc(atomic_t *v);
	void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value.  The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.

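For instance, a pure statistics counter needs no ordering against
surrounding memory operations, so the barrier-less routines are all
that is required (the names here are invented for illustration):

	static atomic_t rx_count = ATOMIC_INIT(0);

	static void note_rx_packet(void)
	{
		atomic_inc(&rx_count);	/* SMP safe, no barriers implied */
	}
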
Next, we have:

	int atomic_inc_return(atomic_t *v);
	int atomic_dec_return(atomic_t *v);

These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.

Unlike the above routines, it is required that explicit memory
barriers are performed before and after the operation.  It must be
done such that all memory operations before and after the atomic
operation calls are strongly ordered with respect to the atomic
operation itself.

For example, it should behave as if an smp_mb() call existed both
before and after the atomic operation.

If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.

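As a sketch of a typical use (the names are invented for illustration),
handing out unique ticket numbers relies both on the atomicity of the
update and on the implied barriers:

	static atomic_t next_ticket = ATOMIC_INIT(0);

	static int take_ticket(void)
	{
		/* Returns a unique value; the implied smp_mb() on each
		 * side orders this update against the caller's other
		 * memory operations.
		 */
		return atomic_inc_return(&next_ticket);
	}
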
Let's move on:

	int atomic_add_return(int i, atomic_t *v);
	int atomic_sub_return(int i, atomic_t *v);

These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.

Next:

	int atomic_inc_and_test(atomic_t *v);
	int atomic_dec_and_test(atomic_t *v);

These two routines increment and decrement by 1, respectively, the
given atomic counter.  They return a boolean indicating whether the
resulting counter value was zero or not.

They require explicit memory barrier semantics around the operation, as
above.

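The canonical use of atomic_dec_and_test() is dropping the final
reference to an object (obj_free() here is a stand-in for whatever
destructor applies):

	if (atomic_dec_and_test(&obj->refcnt))
		obj_free(obj);	/* we held the last reference */
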
	int atomic_sub_and_test(int i, atomic_t *v);

This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1".  It requires explicit
memory barrier semantics around the operation.

	int atomic_add_negative(int i, atomic_t *v);

The given increment is added to the given atomic counter value.  A
boolean is returned which indicates whether the resulting counter value
is negative.  It requires explicit memory barrier semantics around the
operation.

Then:

	int atomic_xchg(atomic_t *v, int new);

This performs an atomic exchange operation on the atomic variable v, setting
the given new value.  It returns the old value that the atomic variable v had
just before the operation.

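One common use is claiming a one-shot piece of work; only the first
caller sees the old value of zero (the names below are invented for
illustration):

	static atomic_t claimed = ATOMIC_INIT(0);

	if (atomic_xchg(&claimed, 1) == 0)
		do_one_time_setup();	/* exactly one caller gets here */
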
	int atomic_cmpxchg(atomic_t *v, int old, int new);

This performs an atomic compare exchange operation on the atomic value v,
with the given old and new values.  Like all atomic_xxx operations,
atomic_cmpxchg will only satisfy its atomicity semantics as long as all
other accesses of *v are performed through atomic_xxx operations.

atomic_cmpxchg requires explicit memory barriers around the operation.

The semantics for atomic_cmpxchg are the same as those defined for 'cas'
below.

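As a sketch of the usual compare-exchange retry loop (the helper name
is invented for illustration), here is a saturating increment built on
atomic_cmpxchg():

	static void atomic_inc_saturating(atomic_t *v, int max)
	{
		int old, new;

		do {
			old = atomic_read(v);
			new = old < max ? old + 1 : max;
		} while (atomic_cmpxchg(v, old, new) != old);
	}
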
Finally:

	int atomic_add_unless(atomic_t *v, int a, int u);

If the atomic value v is not equal to u, this function adds a to v, and
returns non-zero.  If v is equal to u then it returns zero.  This is done as
an atomic operation.

atomic_add_unless requires explicit memory barriers around the operation
unless it fails (returns 0).

atomic_inc_not_zero() is equivalent to atomic_add_unless(v, 1, 0).

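A typical use is taking a new reference only if the object has not
already dropped its last one, as in this sketch (obj is hypothetical):

	if (!atomic_inc_not_zero(&obj->refcnt))
		return NULL;	/* too late, obj is being torn down */
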
If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are
defined which accomplish this:

	void smp_mb__before_atomic_dec(void);
	void smp_mb__after_atomic_dec(void);
	void smp_mb__before_atomic_inc(void);
	void smp_mb__after_atomic_inc(void);

For example, smp_mb__before_atomic_dec() can be used like so:

	obj->dead = 1;
	smp_mb__before_atomic_dec();
	atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

Without the explicit smp_mb__before_atomic_dec() call, the
implementation could legally allow the atomic counter update to become
visible to other cpus before the "obj->dead = 1;" assignment.

The other three interfaces listed are used to provide explicit
ordering with respect to memory operations after an atomic_dec() call
(smp_mb__after_atomic_dec()) and around atomic_inc() calls
(smp_mb__{before,after}_atomic_inc()).

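For example, smp_mb__after_atomic_inc() can make a reference count
increment globally visible before a subsequent store (obj->published
is an invented field for illustration):

	atomic_inc(&obj->ref_count);
	smp_mb__after_atomic_inc();
	obj->published = 1;
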
A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results.  Here is
an example, which follows a pattern occurring frequently in the Linux
kernel.  It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object:

static void obj_list_add(struct obj *obj, struct list_head *head)
{
	obj->active = 1;
	list_add(&obj->list, head);
}

static void obj_list_del(struct obj *obj)
{
	list_del(&obj->list);
	obj->active = 0;
}

static void obj_destroy(struct obj *obj)
{
	BUG_ON(obj->active);
	kfree(obj);
}

struct obj *obj_list_peek(struct list_head *head)
{
	if (!list_empty(head)) {
		struct obj *obj;

		obj = list_entry(head->next, struct obj, list);
		atomic_inc(&obj->refcnt);
		return obj;
	}
	return NULL;
}

void obj_poke(void)
{
	struct obj *obj;

	spin_lock(&global_list_lock);
	obj = obj_list_peek(&global_list);
	spin_unlock(&global_list_lock);

	if (obj) {
		obj->ops->poke(obj);
		if (atomic_dec_and_test(&obj->refcnt))
			obj_destroy(obj);
	}
}

void obj_timeout(struct obj *obj)
{
	spin_lock(&global_list_lock);
	obj_list_del(obj);
	spin_unlock(&global_list_lock);

	if (atomic_dec_and_test(&obj->refcnt))
		obj_destroy(obj);
}

(This is a simplification of the ARP queue management in the generic
 neighbour discovery code of the networking subsystem.  Olaf Kirch
 found a bug wrt. memory barriers in kfree_skb() that exposed the
 atomic_t memory barrier requirements quite clearly.)

Given the above scheme, it must be the case that the obj->active
update done by the obj list deletion be visible to other processors
before the atomic counter decrement is performed.

Otherwise, the counter could fall to zero, yet obj->active would still
be set, thus triggering the assertion in obj_destroy().  The error
sequence looks like this:

	cpu 0				cpu 1
	obj_poke()			obj_timeout()
	obj = obj_list_peek();
	... gains ref to obj, refcnt=2
					obj_list_del(obj);
					obj->active = 0 ...
					... visibility delayed ...
					atomic_dec_and_test()
					... refcnt drops to 1 ...
	atomic_dec_and_test()
	... refcount drops to 0 ...
	obj_destroy()
	BUG() triggers since obj->active
	still seen as one
					obj->active update visibility occurs

With the memory barrier semantics required of the atomic_t operations
which return values, the above sequence of memory visibility can never
happen.  Specifically, in the above case the atomic_dec_and_test()
counter decrement would not become globally visible until the
obj->active update does.

As a historical note, 32-bit Sparc used to only allow usage of
24-bits of its atomic_t type.  This was because it used 8 bits
as a spinlock for SMP safety.  Sparc32 lacked a "compare and swap"
type instruction.  However, 32-bit Sparc has since been moved over
to a "hash table of spinlocks" scheme that allows the full 32-bit
counter to be realized.  Essentially, an array of spinlocks is
indexed into based upon the address of the atomic_t being operated
on, and that lock protects the atomic operation.  Parisc uses the
same scheme.

Another note is that the atomic_t operations returning values are
extremely slow on an old 386.

We will now cover the atomic bitmask operations.  You will find that
their SMP and memory barrier semantics are similar in shape and scope
to the atomic_t ops above.

Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are at least of that
size.  The endianness of the bits within each "unsigned long" is the
native endianness of the cpu.

	void set_bit(unsigned long nr, volatile unsigned long *addr);
	void clear_bit(unsigned long nr, volatile unsigned long *addr);
	void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit number
indicated by "nr" on the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces.

	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.

WARNING! It is incredibly important that the value be a boolean,
ie. "0" or "1".  Do not try to be fancy and save a few instructions by
declaring the above to return "long" and just returning something like
"old_val & mask" because that will not work.

For one thing, this return value gets truncated to int in many code
paths using these interfaces, so on 64-bit if the bit is set in the
upper 32-bits then testers will never see that.

One great example of where this problem crops up is the thread_info
flag operations.  Routines such as test_and_set_ti_thread_flag() chop
the return value into an int.  There are other places where things
like this occur as well.

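To make the failure mode concrete, here is a sketch of the broken
pattern being warned against (arch_fetch_or() stands in for whatever
atomic fetch-and-or primitive the architecture provides):

	static long broken_test_and_set_bit(unsigned long nr,
					    volatile unsigned long *addr)
	{
		unsigned long mask = 1UL << (nr % BITS_PER_LONG);
		unsigned long old;

		old = arch_fetch_or(addr + nr / BITS_PER_LONG, mask);

		/* WRONG: for nr = 40 on 64-bit, the interesting bit is
		 * bit 40 of the result, which vanishes when a caller
		 * truncates the return value to int.
		 */
		return old & mask;
	}
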
These routines, like the atomic_t counter operations returning values,
require explicit memory barrier semantics around their execution.  All
memory operations before the atomic bit operation call must be made
visible globally before the atomic bit operation is made visible.
Likewise, the atomic bit operation must be visible globally before any
subsequent memory operation is made visible.  For example:

	obj->dead = 1;
	if (test_and_set_bit(0, &obj->flags))
		/* ... */;
	obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible.  Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before
"obj->killed = 1;" is visible.

Finally there is the basic operation:

	int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);

This returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".

If explicit memory barriers are required around clear_bit() (which
does not return a value, and thus does not need to provide memory
barrier semantics), two interfaces are provided:

	void smp_mb__before_clear_bit(void);
	void smp_mb__after_clear_bit(void);

They are used as follows, and are akin to their atomic_t operation
brothers:

	/* All memory operations before this call will
	 * be globally visible before the clear_bit().
	 */
	smp_mb__before_clear_bit();
	clear_bit( ... );

	/* The clear_bit() will be visible before all
	 * subsequent memory operations.
	 */
	smp_mb__after_clear_bit();

There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks).  These operate in the same way as their non-_lock/unlock
postfixed variants, except that they provide acquire/release semantics,
respectively.  This means they can be used for bit_spin_trylock and
bit_spin_unlock type operations without specifying any more barriers.

	int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
	void clear_bit_unlock(unsigned long nr, unsigned long *addr);
	void __clear_bit_unlock(unsigned long nr, unsigned long *addr);

The __clear_bit_unlock version is non-atomic; however, it still implements
unlock barrier semantics.  This can be useful if the lock itself is protecting
the other bits in the word.

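As a sketch, an open-coded bit spinlock needs nothing beyond these
operations ("word" here is a hypothetical unsigned long):

	while (test_and_set_bit_lock(0, &word))
		cpu_relax();			/* spin until acquired */

	/* critical section; bit 0 is held as a lock */

	clear_bit_unlock(0, &word);		/* release */
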
Finally, there are non-atomic versions of the bitmask operations
provided.  They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
expensive non-atomic operations may be used in the implementation.
They have names similar to the above bitmask operation interfaces,
except that two underscores are prefixed to the interface name.

	void __set_bit(unsigned long nr, volatile unsigned long *addr);
	void __clear_bit(unsigned long nr, volatile unsigned long *addr);
	void __change_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

These non-atomic variants also do not require any special memory
barrier semantics.

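For example, if a bitmap is already protected by a spinlock, the
cheaper non-atomic variants may be used inside the critical section
(map_lock and bitmap are invented names for illustration):

	spin_lock(&map_lock);
	__set_bit(nr, bitmap);	/* the lock already serializes updates */
	spin_unlock(&map_lock);
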
The routines xchg() and cmpxchg() need the same exact memory barriers
as the atomic and bit operations returning values.

Spinlocks and rwlocks have memory barrier expectations as well.
The rule to follow is simple:

1) When acquiring a lock, the implementation must make it globally
   visible before any subsequent memory operation.

2) When releasing a lock, the implementation must make it such that
   all previous memory operations are globally visible before the
   lock release.

Which finally brings us to _atomic_dec_and_lock().  There is an
architecture-neutral version implemented in lib/dec_and_lock.c,
but most platforms will wish to optimize this in assembler.

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);

Atomically decrement the given counter, and if it will drop to zero,
atomically acquire the given spinlock and perform the decrement
of the counter to zero.  If it does not drop to zero, do nothing
and return false.

It is actually pretty simple to get the memory barrier correct.
Simply satisfy the spinlock grab requirements, which is to make
sure the spinlock operation is globally visible before any
subsequent memory operation.

We can demonstrate this operation more clearly if we define
an abstract atomic operation:

	long cas(long *mem, long old, long new);

"cas" stands for "compare and swap".  It atomically:

1) Compares "old" with the value currently at "mem".
2) If they are equal, "new" is written to "mem".
3) Regardless, the current value at "mem" is returned.

As an example usage, here is what an atomic counter update
might look like:

void example_atomic_inc(long *counter)
{
	long old, new, ret;

	while (1) {
		old = *counter;
		new = old + 1;

		ret = cas(counter, old, new);
		if (ret == old)
			break;
	}
}

Let's use cas() in order to build a pseudo-C atomic_dec_and_lock():

int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
{
	long old, new, ret;
	int went_to_zero;

	went_to_zero = 0;
	while (1) {
		old = atomic_read(atomic);
		new = old - 1;
		if (new == 0) {
			went_to_zero = 1;
			spin_lock(lock);
		}
		ret = cas(atomic, old, new);
		if (ret == old)
			break;
		if (went_to_zero) {
			went_to_zero = 0;
			spin_unlock(lock);
		}
	}

	return went_to_zero;
}

Now, as far as memory barriers go, as long as spin_lock()
strictly orders all subsequent memory operations (including
the cas()) with respect to itself, things will be fine.

Said another way, _atomic_dec_and_lock() must guarantee that
a counter dropping to zero is never made visible before the
spinlock is acquired.

Note that this also means that for the case where the counter
is not dropping to zero, there are no memory ordering
requirements.