/*
 * (C) COPYRIGHT 2010-2011 ARM Limited. All rights reserved.
 *
 * This program is free software and is provided to you under the terms of the GNU General Public License version 2
 * as published by the Free Software Foundation, and any use by you of this program is subject to the terms of such GNU licence.
 *
 * A copy of the licence is included with the program, and can also be obtained from Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 */

#ifndef _OSK_LOCKS_H_
#define _OSK_LOCKS_H_

#ifndef _OSK_H_
#error "Include mali_osk.h directly"
#endif

/**
 * @addtogroup base_api
 * @{
 */

/**
 * @addtogroup base_osk_api
 * @{
 */

/**
 * @defgroup osklocks Mutual Exclusion
 * @{
 *
 * A read/write lock (rwlock) is used to control access to a shared resource,
 * where multiple threads are allowed to read from the shared resource, but
 * only one thread is allowed to write to the shared resource at any one time.
 * A thread must specify the type of access (read/write) when locking the
 * rwlock. If a rwlock is locked for write access, other threads that attempt
 * to lock the same rwlock will block. If a rwlock is locked for read access,
 * threads that attempt to lock the rwlock for write access will block until
 * all threads with read access have unlocked the rwlock.
 *
 * @note If an OS does not provide a synchronisation object to implement a
 * rwlock, an OSK mutex can be used instead for its implementation. This would
 * only allow one reader or writer to access the shared resource at any one
 * time.
 *
 * A mutex is used to control access to a shared resource, where only one
 * thread is allowed access at any one time. A thread must lock the mutex
 * to gain access; other threads that attempt to lock the same mutex will
 * block. Mutexes can only be unlocked by the thread that holds the lock.
 *
 * @note OSK mutexes are intended for use in a situation where access to the
 * shared resource is likely to be contended. OSK mutexes make use of the
 * mutual exclusion primitives provided by the target OS, which are often
 * considered "heavyweight".
 *
 * Spinlocks are also used to control access to a shared resource and
 * enforce that only one thread has access at any one time. They differ from
 * OSK mutexes in that they poll the lock to obtain it. This makes a
 * spinlock especially suited for contexts where you are not allowed to block
 * while waiting for access to the shared resource. An OSK mutex could not be
 * used in such a context as it can block while trying to obtain the mutex.
 *
 * A spinlock should be held for the minimum time possible, as in the contended
 * case threads will not sleep but poll and therefore use CPU cycles.
 *
 * While holding a spinlock, you must not sleep. You must not obtain a rwlock,
 * mutex or do anything else that might block your thread. This is to prevent another
 * thread trying to lock the same spinlock while your thread holds the spinlock,
 * which could take a very long time (as it requires your thread to get scheduled
 * in again and unlock the spinlock) or could even deadlock your system.
 *
 * Spinlocks are considered 'lightweight': in the uncontended case, the lock
 * can be obtained quickly. In the lightly-contended case on multiprocessor
 * systems, the lock can be obtained quickly without resorting to
 * "heavyweight" OS primitives.
 *
 * Two types of spinlocks are provided. A type that is safe to use when sharing
 * a resource with an interrupt service routine, and one that should only be
 * used to share the resource between threads. The former should be used to
 * prevent deadlock between a thread that holds a spinlock while an
 * interrupt occurs and the interrupt service routine trying to obtain the same
 * spinlock.
 *
 * @anchor oskmutex_spinlockdetails
 * @par Important details of OSK Spinlocks.
 *
 * OSK spinlocks are not intended for high-contention cases. If high-contention
 * use cases occur frequently for a particular spinlock, then it is wise to
 * consider using an OSK Mutex instead.
 *
 * @note An especially important reason for not using OSK Spinlocks in highly
 * contended cases is that they defeat the OS's Priority Inheritance mechanisms
 * that would normally alleviate Priority Inversion problems. This is because
 * once the spinlock is obtained, the OS usually does not know which thread has
 * obtained the lock, and so cannot know which thread must have its priority
 * boosted to alleviate the Priority Inversion.
 *
 * As a guide, use a spinlock when CPU-bound for a short period of time
 * (thousands of cycles). CPU-bound operations include reading/writing of
 * memory or registers. Do not use a spinlock when IO bound (e.g. user input,
 * buffered IO reads/writes, calls involving significant device driver IO
 * operations).
 */

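The polling behaviour described above can be sketched with C11 atomics. This is an illustrative model only; `sketch_spinlock` and its functions are hypothetical names, not the OSK implementation:

```c
#include <stdatomic.h>

/* Hypothetical sketch of spinlock polling; not the OSK implementation. */
typedef struct { atomic_int locked; } sketch_spinlock;

static void sketch_spin_init(sketch_spinlock *l)
{
    atomic_init(&l->locked, 0);             /* 0 = unlocked, 1 = locked */
}

static void sketch_spin_lock(sketch_spinlock *l)
{
    int expected = 0;
    /* Poll until the 0 -> 1 swap succeeds; contended threads burn
     * CPU cycles in this loop instead of sleeping. */
    while (!atomic_compare_exchange_weak(&l->locked, &expected, 1))
        expected = 0;
}

static void sketch_spin_unlock(sketch_spinlock *l)
{
    atomic_store(&l->locked, 0);            /* release: a waiter's swap can now succeed */
}
```

The model makes the cost trade-off visible: the uncontended acquire is a single atomic swap, while a contended acquire loops on the CPU, which is why hold times must be kept short.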
/**
 * @brief Initialize a mutex
 *
 * Initialize a mutex structure. If the function returns successfully, the
 * mutex is in the unlocked state.
 *
 * The caller must allocate the memory for the @see osk_mutex
 * structure, which is then populated within this function. If the OS-specific
 * mutex referenced from the structure cannot be initialized, an error is
 * returned.
 *
 * The mutex must be terminated when no longer required, by using
 * osk_mutex_term(). Otherwise, a resource leak may result in the OS.
 *
 * The mutex is initialized with a lock order parameter, \a order. Refer to
 * @see oskmutex_lockorder for more information on Rwlock/Mutex/Spinlock lock
 * ordering.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * It is a programming error to attempt to initialize a mutex that is
 * currently initialized.
 *
 * @param[out] lock pointer to an uninitialized mutex structure
 * @param[in] order the locking order of the mutex
 * @return OSK_ERR_NONE on success, any other value indicates a failure.
 */
OSK_STATIC_INLINE osk_error osk_mutex_init(osk_mutex * const lock, osk_lock_order order) CHECK_RESULT;

/**
 * @brief Terminate a mutex
 *
 * Terminate the mutex pointed to by \a lock, which must be
 * a pointer to a valid unlocked mutex. When the mutex is terminated, the
 * OS-specific mutex is freed.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * It is a programming error to attempt to terminate a mutex that is currently
 * terminated.
 *
 * @illegal It is illegal to call osk_mutex_term() on a locked mutex.
 *
 * @param[in] lock pointer to a valid mutex structure
 */
OSK_STATIC_INLINE void osk_mutex_term(osk_mutex * lock);

/**
 * @brief Lock a mutex
 *
 * Lock the mutex pointed to by \a lock. If the mutex is currently unlocked,
 * the calling thread returns with the mutex locked. If a second thread
 * attempts to lock the same mutex, it blocks until the first thread
 * unlocks the mutex. If two or more threads are blocked waiting on the first
 * thread to unlock the mutex, it is undefined as to which thread is unblocked
 * when the first thread unlocks the mutex.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * It is a programming error to lock a mutex or spinlock with an order that is
 * higher than any mutex or spinlock held by the current thread. Mutexes and
 * spinlocks must be locked in the order of highest to lowest, to prevent
 * deadlocks. Refer to @see oskmutex_lockorder for more information.
 *
 * It is a programming error to exit a thread while it has a locked mutex.
 *
 * It is a programming error to lock a mutex from an ISR context. In an ISR
 * context you are not allowed to block, which osk_mutex_lock() potentially does.
 *
 * @illegal It is illegal to call osk_mutex_lock() on a mutex that is currently
 * locked by the caller thread. That is, it is illegal for the same thread to
 * lock a mutex twice, without unlocking it in between.
 *
 * @param[in] lock pointer to a valid mutex structure
 */
OSK_STATIC_INLINE void osk_mutex_lock(osk_mutex * lock);

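The highest-to-lowest ordering rule can be illustrated with a tiny debug-style checker. This is a hypothetical sketch (the OSK may or may not verify orders at runtime), simplified to a single thread and no unlock tracking:

```c
#include <limits.h>

/* Hypothetical order checker, not part of the OSK API. It tracks the order
 * of the most recently taken lock; attempting to take an equal or higher
 * order while a lock is held violates the highest-to-lowest rule. */
static int lowest_held_order = INT_MAX;     /* INT_MAX = no locks held */

static int order_check_lock(int order)
{
    if (order >= lowest_held_order)
        return -1;                          /* ordering violation: deadlock risk */
    lowest_held_order = order;
    return 0;
}
```

A real checker would keep a per-thread stack of held orders and pop entries on unlock; the point here is only that every new lock must have a strictly lower order than every lock already held.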
/**
 * @brief Unlock a mutex
 *
 * Unlock the mutex pointed to by \a lock. The calling thread must be the
 * same thread that locked the mutex. If no other threads are waiting on the
 * mutex to be unlocked, the function returns immediately, with the mutex
 * unlocked. If one or more threads are waiting on the mutex to be unlocked,
 * then this function returns, and a thread waiting on the mutex can be
 * unblocked. It is undefined as to which thread is unblocked.
 *
 * @note It is not defined \em when a waiting thread is unblocked. For example,
 * a thread calling osk_mutex_unlock() followed by osk_mutex_lock() may (or may
 * not) obtain the lock again, preventing other threads from being
 * released. Neither the 'immediately releasing', nor the 'delayed releasing'
 * behavior of osk_mutex_unlock() can be relied upon. If such behavior is
 * required, then you must implement it yourself, such as by using a second
 * synchronization primitive.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * @illegal It is illegal for a thread to call osk_mutex_unlock() on a mutex
 * that it has not locked, even if that mutex is currently locked by another
 * thread. That is, it is illegal for any thread other than the 'owner' of the
 * mutex to unlock it. And, you must not unlock an already unlocked mutex.
 *
 * @param[in] lock pointer to a valid mutex structure
 */
OSK_STATIC_INLINE void osk_mutex_unlock(osk_mutex * lock);

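On a POSIX target, the init/lock/unlock/term lifecycle above could plausibly be backed by a pthread mutex. This sketch is an assumption about one possible port (the `sketch_mutex*` names are hypothetical), not the actual OSK implementation:

```c
#include <pthread.h>

/* Hypothetical POSIX-backed mutex sketch; not the real OSK port. */
typedef struct { pthread_mutex_t mtx; } sketch_mutex;

static int sketch_mutex_init(sketch_mutex *lock)
{
    /* Init can fail, mirroring osk_mutex_init() returning an error code. */
    return pthread_mutex_init(&lock->mtx, NULL);
}

static void sketch_mutex_lock(sketch_mutex *lock)   { pthread_mutex_lock(&lock->mtx); }
static void sketch_mutex_unlock(sketch_mutex *lock) { pthread_mutex_unlock(&lock->mtx); }

static void sketch_mutex_term(sketch_mutex *lock)
{
    /* Must only be called on an unlocked mutex, as with osk_mutex_term();
     * destroying a locked pthread mutex is undefined behaviour. */
    pthread_mutex_destroy(&lock->mtx);
}
```

Note how the "heavyweight" character of a mutex comes from the OS primitive underneath: a contended `pthread_mutex_lock()` puts the caller to sleep rather than spinning.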
/**
 * @brief Initialize a spinlock
 *
 * Initialize a spinlock. If the function returns successfully, the
 * spinlock is in the unlocked state.
 *
 * @note If the spinlock is used for sharing a resource with an interrupt service
 * routine, use the IRQ safe variant of the spinlock, see osk_spinlock_irq.
 * The IRQ safe variant should be used in that situation to prevent
 * deadlock between a thread/ISR that holds a spinlock while an interrupt occurs
 * and the interrupt service routine trying to obtain the same spinlock too.
 *
 * The caller must allocate the memory for the @see osk_spinlock
 * structure, which is then populated within this function. If the OS-specific
 * spinlock referenced from the structure cannot be initialized, an error is
 * returned.
 *
 * The spinlock must be terminated when no longer required, by using
 * osk_spinlock_term(). Otherwise, a resource leak may result in the OS.
 *
 * The spinlock is initialized with a lock order parameter, \a order. Refer to
 * @see oskmutex_lockorder for more information on Rwlock/Mutex/Spinlock lock
 * ordering.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * It is a programming error to attempt to initialize a spinlock that is
 * currently initialized.
 *
 * @param[out] lock pointer to a spinlock structure
 * @param[in] order the locking order of the spinlock
 * @return OSK_ERR_NONE on success, any other value indicates a failure.
 */
OSK_STATIC_INLINE osk_error osk_spinlock_init(osk_spinlock * const lock, osk_lock_order order) CHECK_RESULT;

/**
 * @brief Terminate a spinlock
 *
 * Terminate the spinlock pointed to by \a lock, which must be
 * a pointer to a valid unlocked spinlock. When the spinlock is terminated, the
 * OS-specific spinlock is freed.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * It is a programming error to attempt to terminate a spinlock that is currently
 * terminated.
 *
 * @illegal It is illegal to call osk_spinlock_term() on a locked spinlock.
 *
 * @param[in] lock pointer to a valid spinlock structure
 */
OSK_STATIC_INLINE void osk_spinlock_term(osk_spinlock * lock);

/**
 * @brief Lock a spinlock
 *
 * Lock the spinlock pointed to by \a lock. If the spinlock is currently unlocked,
 * the calling thread returns with the spinlock locked. If a second thread
 * attempts to lock the same spinlock, it polls the spinlock until the first thread
 * unlocks the spinlock. If two or more threads are polling the spinlock waiting
 * on the first thread to unlock the spinlock, it is undefined as to which thread
 * will lock the spinlock when the first thread unlocks the spinlock.
 *
 * While the spinlock is locked by the calling thread, the spinlock implementation
 * should prevent any possible deadlock issues arising from another thread on the
 * same CPU trying to lock the same spinlock.
 *
 * While holding a spinlock, you must not sleep. You must not obtain a rwlock,
 * mutex or do anything else that might block your thread. This is to prevent another
 * thread trying to lock the same spinlock while your thread holds the spinlock,
 * which could take a very long time (as it requires your thread to get scheduled
 * in again and unlock the spinlock) or could even deadlock your system.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * It is a programming error to lock a spinlock, rwlock or mutex with an order that
 * is higher than any spinlock, rwlock, or mutex held by the current thread. Spinlocks,
 * Rwlocks, and Mutexes must be locked in the order of highest to lowest, to prevent
 * deadlocks. Refer to @see oskmutex_lockorder for more information.
 *
 * It is a programming error to exit a thread while it has a locked spinlock.
 *
 * It is a programming error to lock a spinlock from an ISR context. Use the IRQ
 * safe spinlock type instead.
 *
 * @illegal It is illegal to call osk_spinlock_lock() on a spinlock that is currently
 * locked by the caller thread. That is, it is illegal for the same thread to
 * lock a spinlock twice, without unlocking it in between.
 *
 * @param[in] lock pointer to a valid spinlock structure
 */
OSK_STATIC_INLINE void osk_spinlock_lock(osk_spinlock * lock);

/**
 * @brief Unlock a spinlock
 *
 * Unlock the spinlock pointed to by \a lock. The calling thread must be the
 * same thread that locked the spinlock. If no other threads are polling the
 * spinlock waiting on the spinlock to be unlocked, the function returns
 * immediately, with the spinlock unlocked. If one or more threads are polling
 * the spinlock waiting on the spinlock to be unlocked, then this function
 * returns, and a thread waiting on the spinlock can stop polling and continue
 * with the spinlock locked. It is undefined as to which thread this is.
 *
 * @note It is not defined \em when a waiting thread continues. For example,
 * a thread calling osk_spinlock_unlock() followed by osk_spinlock_lock() may (or may
 * not) obtain the spinlock again, preventing other threads from continuing.
 * Neither the 'immediately releasing', nor the 'delayed releasing'
 * behavior of osk_spinlock_unlock() can be relied upon. If such behavior is
 * required, then you must implement it yourself, such as by using a second
 * synchronization primitive.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * @illegal It is illegal for a thread to call osk_spinlock_unlock() on a spinlock
 * that it has not locked, even if that spinlock is currently locked by another
 * thread. That is, it is illegal for any thread other than the 'owner' of the
 * spinlock to unlock it. And, you must not unlock an already unlocked spinlock.
 *
 * @param[in] lock pointer to a valid spinlock structure
 */
OSK_STATIC_INLINE void osk_spinlock_unlock(osk_spinlock * lock);

/**
 * @brief Initialize an IRQ safe spinlock
 *
 * Initialize an IRQ safe spinlock. If the function returns successfully, the
 * spinlock is in the unlocked state.
 *
 * This variant of spinlock is used for sharing a resource with an interrupt
 * service routine. The IRQ safe variant should be used in this situation to
 * prevent deadlock between a thread/ISR that holds a spinlock while an interrupt
 * occurs and the interrupt service routine trying to obtain the same spinlock
 * too. If the spinlock is not used to share a resource with an interrupt service
 * routine, one should use the osk_spinlock instead of the osk_spinlock_irq
 * variant, see osk_spinlock_init().
 *
 * The caller must allocate the memory for the @see osk_spinlock_irq
 * structure, which is then populated within this function. If the OS-specific
 * spinlock referenced from the structure cannot be initialized, an error is
 * returned.
 *
 * The spinlock must be terminated when no longer required, by using
 * osk_spinlock_irq_term(). Otherwise, a resource leak may result in the OS.
 *
 * The spinlock is initialized with a lock order parameter, \a order. Refer to
 * @see oskmutex_lockorder for more information on Rwlock/Mutex/Spinlock lock
 * ordering.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * It is a programming error to attempt to initialize a spinlock that is
 * currently initialized.
 *
 * @param[out] lock pointer to an IRQ safe spinlock structure
 * @param[in] order the locking order of the IRQ safe spinlock
 * @return OSK_ERR_NONE on success, any other value indicates a failure.
 */
OSK_STATIC_INLINE osk_error osk_spinlock_irq_init(osk_spinlock_irq * const lock, osk_lock_order order) CHECK_RESULT;

/**
 * @brief Terminate an IRQ safe spinlock
 *
 * Terminate the IRQ safe spinlock pointed to by \a lock, which must be
 * a pointer to a valid unlocked IRQ safe spinlock. When the IRQ safe spinlock
 * is terminated, the OS-specific spinlock is freed.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * It is a programming error to attempt to terminate an IRQ safe spinlock that is
 * currently terminated.
 *
 * @param[in] lock pointer to a valid IRQ safe spinlock structure
 */
OSK_STATIC_INLINE void osk_spinlock_irq_term(osk_spinlock_irq * lock);

/**
 * @brief Lock an IRQ safe spinlock
 *
 * Lock the IRQ safe spinlock (from here on referred to as 'spinlock') pointed to
 * by \a lock. If the spinlock is currently unlocked, the calling thread returns
 * with the spinlock locked. If a second thread attempts to lock the same spinlock,
 * it polls the spinlock until the first thread unlocks the spinlock. If two or
 * more threads are polling the spinlock waiting on the first thread to unlock the
 * spinlock, it is undefined as to which thread will lock the spinlock when the
 * first thread unlocks the spinlock.
 *
 * While the spinlock is locked by the calling thread, the spinlock implementation
 * should prevent any possible deadlock issues arising from another thread on the
 * same CPU trying to lock the same spinlock.
 *
 * While holding a spinlock, you must not sleep. You must not obtain a rwlock,
 * mutex or do anything else that might block your thread. This is to prevent another
 * thread trying to lock the same spinlock while your thread holds the spinlock,
 * which could take a very long time (as it requires your thread to get scheduled
 * in again and unlock the spinlock) or could even deadlock your system.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * It is a programming error to lock a spinlock, rwlock or mutex with an order that
 * is higher than any spinlock, rwlock, or mutex held by the current thread. Spinlocks,
 * Rwlocks, and Mutexes must be locked in the order of highest to lowest, to prevent
 * deadlocks. Refer to @see oskmutex_lockorder for more information.
 *
 * It is a programming error to exit a thread while it has a locked spinlock.
 *
 * @illegal It is illegal to call osk_spinlock_irq_lock() on a spinlock that is
 * currently locked by the caller thread. That is, it is illegal for the same thread
 * to lock a spinlock twice, without unlocking it in between.
 *
 * @param[in] lock pointer to a valid IRQ safe spinlock structure
 */
OSK_STATIC_INLINE void osk_spinlock_irq_lock(osk_spinlock_irq * lock);

/**
 * @brief Unlock an IRQ safe spinlock
 *
 * Unlock the IRQ safe spinlock (from here on referred to as 'spinlock') pointed to
 * by \a lock. The calling thread/ISR must be the same thread/ISR that locked the
 * spinlock. If no other threads/ISRs are polling the spinlock waiting on the spinlock
 * to be unlocked, the function returns immediately, with the spinlock unlocked. If
 * one or more threads/ISRs are polling the spinlock waiting on the spinlock to be unlocked,
 * then this function returns, and a thread/ISR waiting on the spinlock can stop polling
 * and continue with the spinlock locked. It is undefined as to which thread/ISR this is.
 *
 * @note It is not defined \em when a waiting thread/ISR continues. For example,
 * a thread/ISR calling osk_spinlock_irq_unlock() followed by osk_spinlock_irq_lock() may
 * (or may not) obtain the spinlock again, preventing other threads from continuing.
 * Neither the 'immediately releasing', nor the 'delayed releasing'
 * behavior of osk_spinlock_irq_unlock() can be relied upon. If such behavior is
 * required, then you must implement it yourself, such as by using a second
 * synchronization primitive.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * @illegal It is illegal for a thread to call osk_spinlock_irq_unlock() on a spinlock
 * that it has not locked, even if that spinlock is currently locked by another
 * thread. That is, it is illegal for any thread other than the 'owner' of the
 * spinlock to unlock it. And, you must not unlock an already unlocked spinlock.
 *
 * @param[in] lock pointer to a valid IRQ safe spinlock structure
 */
OSK_STATIC_INLINE void osk_spinlock_irq_unlock(osk_spinlock_irq * lock);

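Why the IRQ safe variant exists can be shown with a tiny single-threaded model (hypothetical `model_*` names; real interrupt masking is a CPU operation, not a variable): if an ISR on the same CPU spins on a lock that the interrupted thread holds, it spins forever, so the lock/unlock pair masks and restores interrupts around the critical section.

```c
/* Single-threaded model of an IRQ safe spinlock; illustrative only. */
static int irqs_enabled = 1;    /* models the CPU's interrupt-enable flag */
static int lock_word = 0;       /* 0 = unlocked, 1 = locked */

static int model_spin_lock_irqsave(void)
{
    int flags = irqs_enabled;
    irqs_enabled = 0;           /* mask interrupts: no ISR can preempt us now */
    while (lock_word)
        ;                       /* poll; always uncontended in this model */
    lock_word = 1;
    return flags;               /* caller hands this back to the unlock */
}

static void model_spin_unlock_irqrestore(int flags)
{
    lock_word = 0;
    irqs_enabled = flags;       /* restore the previous interrupt state */
}
```

Saving and restoring the flags (rather than unconditionally re-enabling) lets the lock nest correctly inside contexts where interrupts were already disabled.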
/**
 * @brief Initialize a rwlock
 *
 * Read/write locks allow multiple readers to obtain the lock (shared access),
 * or one writer to obtain the lock (exclusive access).
 *
 * Initialize a rwlock structure. If the function returns successfully, the
 * rwlock is in the unlocked state.
 *
 * The caller must allocate the memory for the @see osk_rwlock
 * structure, which is then populated within this function. If the OS-specific
 * rwlock referenced from the structure cannot be initialized, an error is
 * returned.
 *
 * The rwlock must be terminated when no longer required, by using
 * osk_rwlock_term(). Otherwise, a resource leak may result in the OS.
 *
 * The rwlock is initialized with a lock order parameter, \a order. Refer to
 * @see oskmutex_lockorder for more information on Rwlock/Mutex/Spinlock lock
 * ordering.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * It is a programming error to attempt to initialize a rwlock that is
 * currently initialized.
 *
 * @param[out] lock pointer to a rwlock structure
 * @param[in] order the locking order of the rwlock
 * @return OSK_ERR_NONE on success, any other value indicates a failure.
 */
OSK_STATIC_INLINE osk_error osk_rwlock_init(osk_rwlock * const lock, osk_lock_order order) CHECK_RESULT;

/**
 * @brief Terminate a rwlock
 *
 * Terminate the rwlock pointed to by \a lock, which must be
 * a pointer to a valid unlocked rwlock. When the rwlock is terminated, the
 * OS-specific rwlock is freed.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * It is a programming error to attempt to terminate a rwlock that is currently
 * terminated.
 *
 * @illegal It is illegal to call osk_rwlock_term() on a locked rwlock.
 *
 * @param[in] lock pointer to a valid rwlock structure
 */
OSK_STATIC_INLINE void osk_rwlock_term(osk_rwlock * lock);

/**
 * @brief Lock a rwlock for read access
 *
 * Lock the rwlock pointed to by \a lock for read access. A rwlock may
 * be locked for read access by multiple threads. If the rwlock
 * is not locked for exclusive write access, the calling thread
 * returns with the rwlock locked for read access. If the rwlock is
 * currently locked for exclusive write access, the calling thread blocks
 * until the thread with exclusive write access unlocks the rwlock.
 * If multiple threads are blocked waiting for read access or exclusive
 * write access to the rwlock, it is undefined as to which thread is
 * unblocked when the rwlock is unlocked (by the thread with exclusive
 * write access).
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * It is a programming error to lock a rwlock, mutex or spinlock with an order that is
 * higher than any rwlock, mutex or spinlock held by the current thread. Rwlocks, mutexes and
 * spinlocks must be locked in the order of highest to lowest, to prevent
 * deadlocks. Refer to @see oskmutex_lockorder for more information.
 *
 * It is a programming error to exit a thread while it has a locked rwlock.
 *
 * It is a programming error to lock a rwlock from an ISR context. In an ISR
 * context you are not allowed to block, which osk_rwlock_read_lock() potentially does.
 *
 * @illegal It is illegal to call osk_rwlock_read_lock() on a rwlock that is currently
 * locked by the caller thread. That is, it is illegal for the same thread to
 * lock a rwlock twice, without unlocking it in between.
 *
 * @param[in] lock pointer to a valid rwlock structure
 */
OSK_STATIC_INLINE void osk_rwlock_read_lock(osk_rwlock * lock);

/**
 * @brief Unlock a rwlock for read access
 *
 * Unlock the rwlock pointed to by \a lock. The calling thread must be the
 * same thread that locked the rwlock for read access. If no other threads
 * are waiting on the rwlock to be unlocked, the function returns
 * immediately, with the rwlock unlocked. If one or more threads are waiting
 * on the rwlock to be unlocked for write access, and the calling thread
 * is the last thread holding the rwlock for read access, then this function
 * returns, and a thread waiting on the rwlock for write access can be
 * unblocked. It is undefined as to which thread is unblocked.
 *
 * @note It is not defined \em when a waiting thread is unblocked. For example,
 * a thread calling osk_rwlock_read_unlock() followed by osk_rwlock_read_lock()
 * may (or may not) obtain the lock again, preventing other threads from being
 * released. Neither the 'immediately releasing', nor the 'delayed releasing'
 * behavior of osk_rwlock_read_unlock() can be relied upon. If such behavior is
 * required, then you must implement it yourself, such as by using a second
 * synchronization primitive.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * @illegal It is illegal for a thread to call osk_rwlock_read_unlock() on a
 * rwlock that it has not locked, even if that rwlock is currently locked by another
 * thread. That is, it is illegal for any thread other than the 'owner' of the
 * rwlock to unlock it. And, you must not unlock an already unlocked rwlock.
 *
 * @param[in] lock pointer to a valid rwlock structure
 */
OSK_STATIC_INLINE void osk_rwlock_read_unlock(osk_rwlock * lock);

/**
 * @brief Lock a rwlock for exclusive write access
 *
 * Lock the rwlock pointed to by \a lock for exclusive write access. If the
 * rwlock is currently unlocked, the calling thread returns with the rwlock
 * locked. If the rwlock is currently locked, the calling thread blocks
 * until the last thread with read access or the thread with exclusive write
 * access unlocks the rwlock. If multiple threads are blocked waiting
 * for exclusive write access to the rwlock, it is undefined as to which
 * thread is unblocked when the rwlock is unlocked (by either the last thread
 * with read access or the thread with exclusive write access).
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * It is a programming error to lock a rwlock, mutex or spinlock with an order that is
 * higher than any rwlock, mutex or spinlock held by the current thread. Rwlocks, mutexes and
 * spinlocks must be locked in the order of highest to lowest, to prevent
 * deadlocks. Refer to @see oskmutex_lockorder for more information.
 *
 * It is a programming error to exit a thread while it has a locked rwlock.
 *
 * It is a programming error to lock a rwlock from an ISR context. In an ISR
 * context you are not allowed to block, which osk_rwlock_write_lock() potentially does.
 *
 * @illegal It is illegal to call osk_rwlock_write_lock() on a rwlock that is currently
 * locked by the caller thread. That is, it is illegal for the same thread to
 * lock a rwlock twice, without unlocking it in between.
 *
 * @param[in] lock pointer to a valid rwlock structure
 */
OSK_STATIC_INLINE void osk_rwlock_write_lock(osk_rwlock * lock);

/**
 * @brief Unlock a rwlock for exclusive write access
 *
 * Unlock the rwlock pointed to by \a lock. The calling thread must be the
 * same thread that locked the rwlock for exclusive write access. If no
 * other threads are waiting on the rwlock to be unlocked, the function returns
 * immediately, with the rwlock unlocked. If one or more threads are waiting
 * on the rwlock to be unlocked, then this function returns, and a thread
 * waiting on the rwlock can be unblocked. It is undefined as to which
 * thread is unblocked.
 *
 * @note It is not defined \em when a waiting thread is unblocked. For example,
 * a thread calling osk_rwlock_write_unlock() followed by osk_rwlock_write_lock()
 * may (or may not) obtain the lock again, preventing other threads from being
 * released. Neither the 'immediately releasing', nor the 'delayed releasing'
 * behavior of osk_rwlock_write_unlock() can be relied upon. If such behavior is
 * required, then you must implement it yourself, such as by using a second
 * synchronization primitive.
 *
 * It is a programming error to pass an invalid pointer (including NULL)
 * through the \a lock parameter.
 *
 * @illegal It is illegal for a thread to call osk_rwlock_write_unlock() on a
 * rwlock that it has not locked, even if that rwlock is currently locked by another
 * thread. That is, it is illegal for any thread other than the 'owner' of the
 * rwlock to unlock it. And, you must not unlock an already unlocked rwlock.
 *
 * @param[in] lock pointer to a valid read/write lock structure
 */
OSK_STATIC_INLINE void osk_rwlock_write_unlock(osk_rwlock * lock);

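The reader/writer admission rules documented above can be condensed into a small counting model (hypothetical `model_*` names; the real lock blocks callers instead of returning failure):

```c
/* Counting model of rwlock admission; illustrative only, not the OSK code. */
typedef struct { int readers; int writer; } model_rwlock;

static int model_try_read(model_rwlock *l)
{
    if (l->writer)
        return 0;               /* a writer holds exclusive access */
    l->readers++;               /* any number of readers may share the lock */
    return 1;
}

static int model_try_write(model_rwlock *l)
{
    if (l->writer || l->readers)
        return 0;               /* writers need the lock entirely to themselves */
    l->writer = 1;
    return 1;
}

static void model_read_unlock(model_rwlock *l)  { l->readers--; }
static void model_write_unlock(model_rwlock *l) { l->writer = 0; }
```

Note the asymmetry the documentation describes: a reader is only excluded by a writer, while a writer is excluded by both readers and writers, which is why a writer must wait for the last reader to unlock.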
/** @} */ /* end group osklocks */

/** @} */ /* end group base_osk_api */

/** @} */ /* end group base_api */

/* pull in the arch header with the implementation */
#include <osk/mali_osk_arch_locks.h>

#endif /* _OSK_LOCKS_H_ */