On atomic types (atomic_t, atomic64_t and atomic_long_t).

The atomic type provides an interface to the architecture's means of atomic
RMW operations between CPUs (atomic operations on MMIO are not supported and
can lead to fatal traps on some platforms).

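For reference, a minimal sketch of declaring and initialising the three
types; the variable names here are only illustrative:

  static atomic_t      nr_events   = ATOMIC_INIT(0);
  static atomic64_t    total_bytes = ATOMIC64_INIT(0);
  static atomic_long_t nr_objects  = ATOMIC_LONG_INIT(0);
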
API

The 'full' API consists of (atomic64_ and atomic_long_ prefixes omitted for
brevity):

Non-RMW ops:

  atomic_read(), atomic_set()
  atomic_read_acquire(), atomic_set_release()

RMW atomic operations:

  Arithmetic:

    atomic_{add,sub,inc,dec}()
    atomic_{add,sub,inc,dec}_return{,_relaxed,_acquire,_release}()
    atomic_fetch_{add,sub,inc,dec}{,_relaxed,_acquire,_release}()

  Bitwise:

    atomic_{and,or,xor,andnot}()
    atomic_fetch_{and,or,xor,andnot}{,_relaxed,_acquire,_release}()

  Swap:

    atomic_xchg{,_relaxed,_acquire,_release}()
    atomic_cmpxchg{,_relaxed,_acquire,_release}()
    atomic_try_cmpxchg{,_relaxed,_acquire,_release}()

  Reference count (but please see refcount_t):

    atomic_add_unless(), atomic_inc_not_zero()
    atomic_sub_and_test(), atomic_dec_and_test()

  Misc:

    atomic_inc_and_test(), atomic_add_negative()
    atomic_dec_unless_positive(), atomic_inc_unless_negative()

  Barriers:

    smp_mb__{before,after}_atomic()

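A hedged usage sketch of a few of the operations above; the variable and the
complete_all_work() helper are made up for illustration:

  static atomic_t nr_pending = ATOMIC_INIT(0);

  /* submission path */
  atomic_inc(&nr_pending);

  /*
   * completion path: atomic_dec_and_test() returns true for the final
   * decrement, i.e. when the counter hits zero
   */
  if (atomic_dec_and_test(&nr_pending))
          complete_all_work();

  /* status path: a plain load, no ordering implied */
  pr_info("pending: %d\n", atomic_read(&nr_pending));
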
TYPES (signed vs unsigned)

While atomic_t, atomic_long_t and atomic64_t use int, long and s64
respectively (for hysterical raisins), the kernel uses -fno-strict-overflow
(which implies -fwrapv) and defines signed overflow to behave like
2s-complement.

Therefore, an explicitly unsigned variant of the atomic ops is strictly
unnecessary and we can simply cast; there is no UB.

There was a bug in UBSAN prior to GCC-8 that would generate UB warnings for
signed types.

With this we also conform to the C/C++ _Atomic behaviour and things like
P1236R1.

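A minimal sketch of that casting pattern; the seq_next() helper is made up
for illustration and is not an existing kernel API:

  /*
   * Use an atomic_t as an unsigned sequence counter: with -fwrapv the
   * signed increment wraps like 2s-complement, so the cast is well-defined.
   */
  static inline unsigned int seq_next(atomic_t *v)
  {
          return (unsigned int)atomic_inc_return(v);
  }
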
SEMANTICS

Non-RMW ops:

The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
smp_store_release() respectively.

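A minimal sketch of those canonical forms; the my_ prefix marks these as
illustrative rather than the actual architecture code:

  static inline int my_atomic_read(const atomic_t *v)
  {
          return READ_ONCE(v->counter);
  }

  static inline void my_atomic_set(atomic_t *v, int i)
  {
          WRITE_ONCE(v->counter, i);
  }

  static inline int my_atomic_read_acquire(const atomic_t *v)
  {
          return smp_load_acquire(&v->counter);
  }

  static inline void my_atomic_set_release(atomic_t *v, int i)
  {
          smp_store_release(&v->counter, i);
  }
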
The one detail to this is that atomic_set{}() should be observable to the RMW
ops. That is:

  {
    atomic_set(v, 1);
  }

  P0(atomic_t *v)
  {
    (void)atomic_add_unless(v, 1, 0);
  }

  P1(atomic_t *v)
  {
    atomic_set(v, 0);
  }

  exists
  (v=2)

In this case we would expect the atomic_set() from CPU1 to either happen
before the atomic_add_unless(), in which case that latter one would no-op, or
_after_, in which case we'd overwrite its result. In no case is "2" a valid
outcome.

This is typically true on 'normal' platforms, where a regular competing STORE
will invalidate a LL/SC or fail a CMPXCHG.

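As a hedged sketch of why that is, atomic_add_unless() can be written as a
try_cmpxchg() loop; the my_ prefix marks this as illustrative rather than the
actual generic implementation:

  static inline bool my_atomic_add_unless(atomic_t *v, int a, int u)
  {
          int c = atomic_read(v);

          do {
                  if (c == u)
                          return false;
                  /*
                   * A competing STORE changes v->counter, the cmpxchg
                   * fails and 'c' is reloaded with the new value.
                   */
          } while (!atomic_try_cmpxchg(v, &c, c + a));

          return true;
  }
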
The obvious case where this is not so is when we need to implement atomic ops
with a lock:

  CPU0						CPU1

  atomic_add_unless(v, 1, 0);
    lock();
    ret = READ_ONCE(v->counter); // == 1
						atomic_set(v, 0);
    if (ret != u)				WRITE_ONCE(v->counter, 0);
      WRITE_ONCE(v->counter, ret + 1);
    unlock();

the typical solution is to then implement atomic_set{}() with atomic_xchg().

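A hedged sketch of that solution for such a lock-based implementation; the
my_ prefix marks it as illustrative:

  static inline void my_atomic_set(atomic_t *v, int i)
  {
          /*
           * Goes through the same lock as the RMW ops, so the store can no
           * longer slip in between their read and write halves.
           */
          (void)atomic_xchg(v, i);
  }
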
RMW ops:

These come in various forms:

 - plain operations without return value: atomic_{}()

 - operations which return the modified value: atomic_{}_return()

   these are limited to the arithmetic operations because those are
   reversible. Bitops are irreversible and therefore the modified value
   is of dubious utility.

 - operations which return the original value: atomic_fetch_{}()

 - swap operations: xchg(), cmpxchg() and try_cmpxchg()

 - misc; the special purpose operations that are commonly used and would,
   given the interface, normally be implemented using (try_)cmpxchg loops but
   are time critical and can, (typically) on LL/SC architectures, be more
   efficiently implemented.

All these operations are SMP atomic; that is, the operations (for a single
atomic variable) can be fully ordered and no intermediate state is lost or
visible.

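A hedged illustration of the difference between the return-value variants,
assuming a counter that starts at 0 (the variable names are made up):

  static atomic_t cnt = ATOMIC_INIT(0);
  int old, new;

  atomic_inc(&cnt);                     /* counter is now 1, nothing returned   */
  old = atomic_fetch_inc(&cnt);         /* returns the original value: old == 1 */
  new = atomic_inc_return(&cnt);        /* returns the modified value: new == 3 */
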
ORDERING (go read memory-barriers.txt first)

The rule of thumb:

 - non-RMW operations are unordered;

 - RMW operations that have no return value are unordered;

 - RMW operations that have a return value are fully ordered;

 - RMW operations that are conditional are unordered on FAILURE,
   otherwise the above rules apply.

Except of course when an operation has an explicit ordering like:

 {}_relaxed: unordered
 {}_acquire: the R of the RMW (or atomic_read) is an ACQUIRE
 {}_release: the W of the RMW (or atomic_set) is a RELEASE

Where 'unordered' is against other memory locations. Address dependencies are
not defeated.

Fully ordered primitives are ordered against everything prior and everything
subsequent. Therefore a fully ordered primitive is like having an smp_mb()
before and an smp_mb() after the primitive.

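A hedged sketch of the acquire/release variants in a hand-off pattern; the
data, ready and do_something() names are made up for illustration:

  static int data;
  static atomic_t ready = ATOMIC_INIT(0);

  /* producer */
  data = 42;
  atomic_set_release(&ready, 1);        /* RELEASE: the store to data cannot pass it */

  /* consumer */
  if (atomic_read_acquire(&ready))      /* ACQUIRE: the load of data cannot pass it  */
          do_something(data);
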
The barriers:

  smp_mb__{before,after}_atomic()

only apply to the RMW ops and can be used to augment/upgrade the ordering
inherent to the used atomic op. These barriers provide a full smp_mb().

These helper barriers exist because architectures have varying implicit
ordering on their SMP atomic primitives. For example our TSO architectures
provide fully ordered atomics and these barriers are no-ops.

Thus:

  atomic_fetch_add();

is equivalent to:

  smp_mb__before_atomic();
  atomic_fetch_add_relaxed();
  smp_mb__after_atomic();

However the atomic_fetch_add() might be implemented more efficiently.

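A hedged sketch of the typical use of these barriers, upgrading an otherwise
unordered atomic_dec(); the object and field names are made up:

  obj->dead = 1;                /* plain store                          */
  smp_mb__before_atomic();      /* order the store above against ...    */
  atomic_dec(&obj->ref_count);  /* ... this otherwise unordered RMW op  */
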
Further, while something like:

  smp_mb__before_atomic();
  atomic_dec(&X);

is a 'typical' RELEASE pattern, the barrier is strictly stronger than
a RELEASE. Similarly for something like:

  atomic_inc(&X);
  smp_mb__after_atomic();

is an ACQUIRE pattern (though very much not typical), but again the barrier is
strictly stronger than ACQUIRE. As illustrated:

  P1(int *x, atomic_t *y)
  {
    r0 = READ_ONCE(*x);
    smp_rmb();
    r1 = atomic_read(y);
  }

  P2(int *x, atomic_t *y)
  {
    atomic_inc(y);
    smp_mb__after_atomic();
    WRITE_ONCE(*x, 1);
  }

  exists
  (1:r0=1 /\ 1:r1=0)

This should not happen; but a hypothetical atomic_inc_acquire() --
(void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
because it would not order the W part of the RMW against the following
WRITE_ONCE.